Lagrange multiplier in constrained dynamics

11 comments, last by Ivan Reinaldo 7 years, 11 months ago

I'm stuck trying to understand how the Lagrange multiplier fits into collision resolution using sequential impulses. (I need this for a school presentation, and I'm pretty desperate, lol.)

I'm learning mostly from this tutorial at Toptal (which I stop understanding at the equality-constraint part) and Erin Catto's slides.

As far as I understand, Lagrange multipliers are used to solve constrained optimization problems, where the point that optimizes one function must lie on the level set of another function (for example, minimize f(x,y) subject to g(x,y) = 0). But the tutorial says that what is being minimized is actually the constraint function itself (quoted: "In other words, we want to minimize C.")

What am I missing here?


In a velocity-based formulation the Lagrange multiplier is the magnitude of a constraint impulse, whereas in an acceleration-based formulation it is the magnitude of a constraint force. It is what puts the constraint into a valid state. Once the constraint configuration is defined, it shouldn't be difficult to solve for it correctly.

I don't understand what the second question is, though. For example, an equality constraint (e.g. distance, ball-in-socket, or a revolute joint) is satisfied when C = 0. An inequality constraint (e.g. contact, angle limits) is satisfied when lo <= C <= hi (which is actually the general form of a constraint). Therefore it is generally valid to say we want to minimize C so that it remains satisfied all the time.

Yeah, minimize g(x(t)).

BTW, that's a good article and he basically summarized some of Catto's and Witkin's papers intuitively. :)
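To make the "multiplier as impulse magnitude" idea concrete, here is a minimal sketch (my own, not from the thread or the article) of a sequential-impulse solve for a single frictionless contact between two point masses. All names (`solve_contact`, `inv_m_a`, etc.) are illustrative, and rotation, bias terms, and restitution are omitted.

```python
# lambda as a contact-impulse magnitude in a velocity-based solver,
# for two point masses along a known unit contact normal (from A to B).

def solve_contact(v_a, v_b, inv_m_a, inv_m_b, normal, iterations=8):
    """Sequential-impulse style solve of one frictionless contact.

    The velocity constraint is Cdot = (v_b - v_a) . n >= 0; the Lagrange
    multiplier `lam` is the impulse magnitude that enforces it.
    """
    accumulated = 0.0
    eff_mass = 1.0 / (inv_m_a + inv_m_b)  # J M^-1 J^T for this constraint
    for _ in range(iterations):
        rel_vel = sum((vb - va) * n for va, vb, n in zip(v_a, v_b, normal))
        lam = -rel_vel * eff_mass          # raw multiplier this iteration
        # Contacts can only push: clamp the *accumulated* impulse at zero.
        old = accumulated
        accumulated = max(old + lam, 0.0)
        lam = accumulated - old            # clamped delta actually applied
        v_a = [va - inv_m_a * lam * n for va, n in zip(v_a, normal)]
        v_b = [vb + inv_m_b * lam * n for vb, n in zip(v_b, normal)]
    return v_a, v_b, accumulated
```

For two unit masses approaching head-on at speed 1 each, the accumulated multiplier comes out to 1 and both bodies stop; for separating bodies the clamp keeps the multiplier at 0 and nothing is applied, which is exactly the inequality-constraint behavior described above.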

Hi Irlan, thank you for your response

I still don't understand: how come we minimize the constraint function?

This is what I understand about the Lagrange multiplier from calculus class (picture in the attachment).

In the picture, that's the case where I try to minimize F subject to constraint C; by the way, it's a contour plot.

Did I make a mistake somewhere? Can you describe or make an analogy based on that picture?

And yeah, I agree it's a really good article, full of pastel colors and stuff :D

I haven't clicked on the link to the tutorial, but I can explain how Lagrange multipliers work.

You want to minimize f(x,y) subject to g(x,y) = 0. We'll introduce an additional variable l (usually "lambda", but I don't have that letter available). Let's look at the function

L(x,y,l) = f(x,y) - l*g(x,y)

and imagine you've found a point where the derivative of L with respect to each of the three variables is 0. You then have

dL/dx = df/dx - l*dg/dx = 0
dL/dy = df/dy - l*dg/dy = 0
dL/dl = -g(x,y) = 0

The last condition guarantees that the constraint is being satisfied. The other two basically say that the level sets of f and g are tangent.

These are necessary conditions for a point (x,y) to be a solution to your problem. Sufficient conditions do exist, but they are a bit trickier to think about, and this may or may not be important in your case.
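As a worked numeric instance of those conditions (my own example, not from the thread): minimize f(x,y) = x² + y² subject to g(x,y) = x + y - 2 = 0.

```python
# Minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 2 = 0
# via the stationarity conditions of L(x, y, l) = f - l*g.

def f(x, y):
    return x * x + y * y

def g(x, y):
    return x + y - 2.0

# Stationarity of L gives the system:
#   df/dx - l*dg/dx = 2x - l = 0
#   df/dy - l*dg/dy = 2y - l = 0
#   g(x, y)         = x + y - 2 = 0
# Solving this small linear system by hand: x = y = 1, l = 2.
x, y, lam = 1.0, 1.0, 2.0

# Verify all three conditions hold at the solution.
assert abs(2 * x - lam) < 1e-12
assert abs(2 * y - lam) < 1e-12
assert abs(g(x, y)) < 1e-12

# The gradients of f and g are parallel there (tangent level sets):
grad_f = (2 * x, 2 * y)   # (2, 2)
grad_g = (1.0, 1.0)
assert grad_f == (lam * grad_g[0], lam * grad_g[1])
```

Note that f itself gets minimized; g only carves out the feasible set, which is the distinction the original question is about.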

@alvaro

thank you for your response. Yes, that is exactly the lagrange multiplier that I know.

What I don't understand is how it applies to this case; the tutorial says that what is being minimized is the constraint function itself,

whereas in your example (and everywhere else I've seen), we minimize F subject to the constraint function G.

I've been at this for weeks, and at this point I'm starting to doubt it's referring to the same "Lagrange" lol

Where did I go wrong?

I read parts of the tutorial and I think that way of thinking of Lagrange multipliers is probably very useful. The part you quoted about minimizing C seems wrong, though.

Yeah, I agree with Álvaro; he's mathematically wrong. He meant minimizing C towards zero for his equality constraint (although that is also not quite right; he meant keeping the constraint within the constraint bounds).

So it's wrong? How exactly is the Lagrange multiplier supposed to be used in this case?

I've looked everywhere and the explanations seem very unclear to me... :/

allenchou.net/2013/12/game-physics-constraints-sequential-impulse/

I think this is all related to Lagrange's equations (of the first kind). You are not minimizing the constraint function but the Lagrange function, which (IIRC!) is the difference of the kinetic and potential energy:

https://en.wikipedia.org/wiki/Lagrangian_mechanics

I always found it difficult to find good material in English on this. In German there are tons of good resources...
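For reference, the "first kind" equations Dirk mentions couple Newton's second law to the constraint through a multiplier. A standard statement for a single particle (my summary, not taken from the thread or the Wikipedia article):

```latex
% Lagrange equations of the first kind for a particle of mass m,
% position x(t), applied force F, and a holonomic constraint g(x) = 0:
m\,\ddot{x} = F + \lambda\,\nabla g(x), \qquad g(x) = 0.
% Here \lambda is the Lagrange multiplier; \lambda \nabla g(x) is the
% constraint force, normal to the constraint surface (e.g. the wire
% the bead slides along).
```

This is the form game-physics solvers discretize: the unknown is λ, and solving for it per time step is exactly solving for the constraint force (or, after multiplying by dt, the constraint impulse).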

@Dirk

thank you for your response :)

There's the same "bead on a wire" constraint example on that Wikipedia page, so I'm guessing you're correct, although I can't really understand it yet; maybe I will after I study it.

I've never studied Lagrangian mechanics, but from what I've read at that Wikipedia link you gave, is it just a different way of rewriting things?

If that's the case, is this the equivalent in Newtonian mechanics?

In that video, basically, we look for the collision normal, because that's the impulse direction, and then we compute the magnitude of the impulse by relating it to the relative velocity.
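The step described there (direction from the contact normal, magnitude from the relative velocity) is the standard frictionless impulse computation. For two non-rotating bodies with restitution e, a common textbook form (my summary, not quoted from the video) is:

```latex
% Impulse magnitude j along the unit contact normal n (pointing from A
% to B), for bodies with masses m_A, m_B, pre-impulse velocities
% v_A, v_B, and coefficient of restitution e:
j = \frac{-(1 + e)\,(v_B - v_A)\cdot n}{\tfrac{1}{m_A} + \tfrac{1}{m_B}},
\qquad
v_A' = v_A - \frac{j}{m_A}\,n, \quad
v_B' = v_B + \frac{j}{m_B}\,n.
```

Here j plays the role of the Lagrange multiplier: the denominator is the effective mass J M⁻¹ Jᵀ of the contact constraint, and j is clamped to be non-negative because contacts can only push.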

By the way, Dirk, your slides on collision detection at GDC helped me tremendously in implementing SAT. Thanks :D

This topic is closed to new replies.
