# Understanding Constraint Resolution in a Physics Engine


Original Post: Limitless Curiosity

Of the various phases of a physics engine, constraint resolution was personally the hardest for me to understand. I needed to read a lot of different papers and articles to fully understand how constraint resolution works, so I decided to write this article to help me understand it more easily in the future in case, for example, I forget how it works.

This article will tackle the problem by giving an example and then deriving a general formula from it. So let us delve into a pretty common scenario: two of our rigid bodies collide and penetrate each other, as depicted below.

From the scenario above we can formulate:

We don't want our rigid bodies to intersect each other, so we construct a constraint where the penetration depth must be greater than or equal to zero.

$$C: d \geq 0$$

This is an inequality constraint; we can transform it into a simpler equality constraint by solving it only when the two bodies are penetrating each other. If the two rigid bodies don't collide, we don't need any constraint resolution. So:

$$\text{if } d \geq 0:\ \text{do nothing}, \qquad \text{if } d < 0:\ \text{solve } C: d = 0$$

Now we can solve this equation by calculating $$\Delta \vec{p_1}, \Delta \vec{p_2}, \Delta \vec{r_1}$$, and $$\Delta \vec{r_2}$$ that cause the constraint above to be satisfied. This method is called the position-based method. It satisfies the constraint immediately in the current frame, but might cause a jittery effect.

A more modern and preferable method, used in Box2D, Chipmunk, Bullet, and my physics engine, is called the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above.

We are working in 2D, so angular velocity and the cross product of two vectors are scalars.
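Differentiating the position constraint with respect to time gives the velocity constraint. A reconstruction in the standard contact form (where $$\vec{n}$$ is the collision normal and $$\vec{r_1}, \vec{r_2}$$ are the offsets from each body's center of mass to the contact point):

$$\dot{C}: \dot{d} = \left(\vec{v_2} + w_2 \times \vec{r_2} - \vec{v_1} - w_1 \times \vec{r_1}\right) \cdot \vec{n} = 0$$

This can be written compactly as $$\dot{C}: JV = 0$$, with

$$J = \begin{bmatrix} -\vec{n}^{T} & -(\vec{r_1} \times \vec{n}) & \vec{n}^{T} & (\vec{r_2} \times \vec{n}) \end{bmatrix}, \qquad V = \begin{bmatrix} \vec{v_1} \\ w_1 \\ \vec{v_2} \\ w_2 \end{bmatrix}$$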

Next, we need to find $$\Delta V$$, or impulse, to satisfy the velocity constraint. This $$\Delta V$$ is caused by a force we call the 'constraint force'. The constraint force only acts along the direction of illegal movement, in our case the penetration normal. We don't want this force to do any work, or to contribute to or restrict any motion in a legal direction.
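Because the constraint force must act only along the constraint direction, it can be written in terms of a single scalar (a standard sketch of the form the next paragraph discusses):

$$\vec{F}_c = J^{T} \lambda$$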

$$\lambda$$ is a scalar called the Lagrange multiplier. To understand why the constraint force acts in the $$J^{T}$$ direction (remember J is a 1 by 6 matrix, so $$J^{T}$$ is a 6 by 1 matrix, i.e. a 6-dimensional vector), recall the equation of a plane in three dimensions.
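The equation of a plane with normal $$\vec{n}$$ can be written as:

$$\vec{n}^{T} \vec{v} = c$$

where c = 0 for a plane through the origin.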

Now we can draw a similarity between equation (1) and equation (2), where $$\vec{n}^{T}$$ is analogous to J and $$\vec{v}$$ is analogous to V. So we can interpret equation (1) as a plane in 6-dimensional space and conclude that $$J^{T}$$ is the normal of that plane. If a point lies off a plane, the shortest path from the point to the plane is along the normal direction.

After we calculate the Lagrange multiplier, we can get the impulse back from equation (3). Then we can apply this impulse to each rigid body.
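A reconstruction of the impulse computation, in the standard impulse-based form consistent with the solver code later in this article: taking $$\Delta V = M^{-1} J^{T} \lambda$$ and requiring $$J(V + \Delta V) = 0$$ gives

$$\lambda = \frac{-JV}{J M^{-1} J^{T}}$$

where M is the mass matrix holding each body's mass and moment of inertia.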

#### Baumgarte Stabilization

Note that solving the velocity constraint doesn't mean we satisfy the position constraint. By the time we solve the velocity constraint, there is already a violation of the position constraint, which we call position drift. What we achieve is stopping the two bodies from penetrating any deeper (the penetration depth stops growing). This might be fine for a slow-moving object, where the position drift is not noticeable, but it becomes a problem as the object moves faster. The animation below demonstrates what happens when we solve only the velocity constraint.


So instead of purely solving the velocity constraint, we add a bias term to fix any violation of the position constraint.

So what is the value of the bias? As mentioned before, we need this bias to fix the positional drift, so we want the bias to be proportional to the penetration depth.
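With a bias term, the velocity constraint becomes $$\dot{C}: JV + b = 0$$. A bias proportional to the penetration depth can be sketched as follows (the sign is chosen so that the bias pushes the bodies apart; $$\Delta t$$ is the timestep and d is the penetration depth):

$$b = -\frac{\beta}{\Delta t} d$$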

This method is called Baumgarte stabilization, and $$\beta$$ is the Baumgarte term. The right value for this term may differ between scenarios; we need to tweak it between 0 and 1 to find a value that makes our simulation stable.

#### Sequential Impulse

If our world consisted of only two rigid bodies and one contact constraint, the above method would work decently. But in most games, there are more than two rigid bodies. One body can collide and penetrate two or more bodies, and we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all these constraints simultaneously is not feasible. Erin Catto proposed a practical solution called sequential impulse. The idea is similar to Projected Gauss-Seidel: we calculate $$\lambda$$ and $$\Delta V$$ for each constraint, one by one, from constraint one to constraint n (n = number of constraints). After we finish iterating through the constraints and calculating $$\Delta V$$, we repeat the process from constraint one to constraint n until the specified number of iterations is reached. This algorithm converges to the actual solution; the more we repeat the process, the more accurate the result. In Box2D, Erin Catto set ten as the default number of iterations.

Another thing to notice is that while satisfying one constraint we might unintentionally satisfy another. Say, for example, that we have two different contact constraints on the same rigid body.

When we solve $$\dot{C_1}$$, we might incidentally make $$\dot{d_2} \geq 0$$. Remember that equation (5) is a formula for $$\dot{C}: \dot{d} = 0$$, not $$\dot{C}: \dot{d} \geq 0$$, so we don't need to apply it to $$\dot{C_2}$$ anymore. We can detect this by looking at the sign of $$\lambda$$. If the sign of $$\lambda$$ is negative, the constraint is already satisfied, and using this negative lambda as an impulse would pull the bodies closer together instead of pushing them apart. It is fine for an individual $$\lambda$$ to be negative, but we need to make sure the accumulation of $$\lambda$$ is not negative. In each iteration, we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrange multiplier that we use to calculate the impulse is the difference between the new normalImpulseSum and the previous normalImpulseSum.
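The accumulate-and-clamp logic can be isolated into a small helper function. This is a sketch (the function name is mine, not from the engine), but the arithmetic mirrors the solver code below:

```cpp
#include <algorithm>

// 'accum' is the running normalImpulseSum for one contact point;
// 'lambda' is the raw Lagrange multiplier from the current iteration.
// Returns the impulse actually applied this iteration.
float clampAccumulatedImpulse(float& accum, float lambda)
{
    float oldAccum = accum;
    // An individual lambda may be negative, but the accumulated sum may not.
    accum = std::max(0.0f, accum + lambda);
    // The applied impulse is the change in the accumulated sum.
    return accum - oldAccum;
}
```

Note that a negative lambda is still applied as long as the accumulated sum stays non-negative; this lets later iterations undo an overshoot from earlier ones without ever turning the contact into a "pulling" constraint.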

#### Restitution

Okay, now we have successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted $$C_{r}$$, is the ratio of the parting speed after the collision to the closing speed before the collision.

The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector.

Notice that in this specific case $$V_{initial}$$ is similar to JV. If we look back at our constraint above, we set $$\dot{d}$$ to zero because we assumed the object does not bounce back ($$C_{r}=0$$). So, if $$C_{r} \neq 0$$, we can modify our constraint so the desired velocity is $$V_{final}$$ instead of 0.
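In other words, we want the final normal velocity to be $$-C_{r}$$ times the initial one, which can be sketched as a modified velocity constraint:

$$\dot{C}: JV + C_{r}\,(JV_{initial}) = 0$$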

We can merge our old bias term with the restitution term to get a new bias value.
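A sketch of the merged bias, consistent with the solver code below (which adds positionBias/dt and restitutionBias to $$-JV$$):

$$b = -\frac{\beta}{\Delta t} d + C_{r}\,(JV_{initial})$$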

```cpp
// init constraint
// Calculate J(M^-1)(J^T). This term is constant so we can calculate it first.
for (int i = 0; i < constraint->numContactPoint; i++)
{
    ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];

    pointConstraint->r1 = manifold->contactPoints[i].r1 - (bodyA->transform.center + bodyA->centerOfMass);
    pointConstraint->r2 = manifold->contactPoints[i].r2 - (bodyB->transform.center + bodyB->centerOfMass);

    real kNormal = bodyA->inverseMass + bodyB->inverseMass;

    // Calculate r X normal
    real rnA = pointConstraint->r1.cross(constraint->normal);
    real rnB = pointConstraint->r2.cross(constraint->normal);

    // Calculate J(M^-1)(J^T).
    kNormal += (bodyA->inverseMoment * rnA * rnA + bodyB->inverseMoment * rnB * rnB);

    // Save inverse of J(M^-1)(J^T).
    pointConstraint->normalMass = 1 / kNormal;

    pointConstraint->positionBias = m_option.baumgarteCoef * manifold->penetrationDepth;

    ftVector2 vA = bodyA->velocity;
    ftVector2 vB = bodyB->velocity;
    real wA = bodyA->angularVelocity;
    real wB = bodyB->angularVelocity;

    ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));

    // Calculate JV
    real jnV = dv.dot(constraint->normal);
    pointConstraint->restitutionBias = -restitution * (jnV + m_option.restitutionSlop);
}
```

```cpp
// solve constraint
while (numIteration > 0)
{
    for (int i = 0; i < m_constraintGroup.nConstraint; ++i)
    {
        ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);
        int32 bodyIDA = constraint->bodyIDA;
        int32 bodyIDB = constraint->bodyIDB;
        ftVector2 normal = constraint->normal;
        ftVector2 tangent = normal.tangent();

        for (int j = 0; j < constraint->numContactPoint; ++j)
        {
            ftContactPointConstraint *pointConstraint = &(constraint->pointConstraint[j]);

            ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
            ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
            real wA = m_constraintGroup.angularVelocities[bodyIDA];
            real wB = m_constraintGroup.angularVelocities[bodyIDB];

            // Calculate JV. (jnV = JV, dv = derivative of d, JV = derivative(d) dot normal)
            ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));
            real jnV = dv.dot(normal);

            // Calculate lambda (the Lagrange multiplier)
            real nLambda = (-jnV + pointConstraint->positionBias / dt + pointConstraint->restitutionBias) *
                pointConstraint->normalMass;

            // Add lambda to normalImpulseSum and clamp
            real oldAccumI = pointConstraint->nIAcc;
            pointConstraint->nIAcc += nLambda;
            if (pointConstraint->nIAcc < 0)
            {
                pointConstraint->nIAcc = 0;
            }

            // Find the real lambda
            real I = pointConstraint->nIAcc - oldAccumI;

            // Calculate linear impulse
            ftVector2 nLinearI = normal * I;

            // Calculate angular impulse
            real rnA = pointConstraint->r1.cross(normal);
            real rnB = pointConstraint->r2.cross(normal);
            real nAngularIA = rnA * I;
            real nAngularIB = rnB * I;

            // Apply linear impulse
            m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
            m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;

            // Apply angular impulse
            m_constraintGroup.angularVelocities[bodyIDA] -= constraint->invMomentA * nAngularIA;
            m_constraintGroup.angularVelocities[bodyIDB] += constraint->invMomentB * nAngularIB;
        }
    }
    --numIteration;
}
```

#### General Step to Solve Constraint

In this article, we have learned how to handle contact penetration by defining it as a constraint and solving it. But this framework is not only for contact penetration; we can do many more cool things with constraints, like implementing hinge joints, pulleys, springs, etc.

So here is the step-by-step process of constraint resolution:

1. Define the constraint in the form $$\dot{C}: JV + b = 0$$. V is always $$\begin{bmatrix} \vec{v_1} \\ w_1 \\ \vec{v_2} \\ w_2\end{bmatrix}$$ for every constraint, so we need to find J, the Jacobian matrix, for that specific constraint.
2. Decide the number of iterations for the sequential impulse solver.
3. Next, find the Lagrange multiplier by inserting the velocity, mass, and Jacobian matrix into this equation:
4. Do step 3 for each constraint, and repeat the process for the chosen number of iterations.
5. Clamp the Lagrange multiplier if needed.
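The equation referred to in step 3 can be reconstructed as the Lagrange multiplier formula used throughout this article, with b as the bias term:

$$\lambda = \frac{-(JV + b)}{J M^{-1} J^{T}}$$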

This marks the end of this article. Feel free to ask if something is still unclear, and please inform me if there are inaccuracies in the article. Thank you for reading.

NB: Box2D uses sequential impulse, but it no longer uses Baumgarte stabilization; it uses full NGS (nonlinear Gauss-Seidel) to resolve position drift. Chipmunk still uses Baumgarte stabilization.

#### References

1. Allen Chou's post on Constraint Resolution
2. A Unified Framework for Rigid Body Dynamics
3. An Introduction to Physically Based Modeling: Constrained Dynamics
4. Erin Catto's Box2d and presentation on constraint resolution


