01/21/18 10:47 PM

Understanding Constraint Resolution in a Physics Engine

    Math and Physics


    Original Post: Limitless Curiosity

Of the various phases of a physics engine, constraint resolution was personally the hardest for me to understand. I had to read a lot of different papers and articles to fully understand how constraint resolution works. So I decided to write this article to help me understand it more easily in the future if, for example, I forget how it works.

This article tackles the problem by working through an example and then deriving a general formula from it. So let us delve into a pretty common scenario: two of our rigid bodies collide and penetrate each other, as depicted below.


From the scenario above we can formulate the penetration depth as:

\(d = (\vec{p2} + \vec{r2} - \vec{p1} - \vec{r1}) \cdot \vec{n}\)

where \(\vec{p}\) is each body's center of mass, \(\vec{r}\) is the vector from the center of mass to the contact point, and \(\vec{n}\) is the penetration normal.
We don't want our rigid bodies to intersect each other, so we construct a constraint where the penetration depth must be greater than or equal to zero.

\(C: d \geq 0\)

This is an inequality constraint. We can transform it into a simpler equality constraint by solving it only when the two bodies are penetrating each other. If the two rigid bodies don't collide, we don't need any constraint resolution. So:

if \(d \geq 0\), do nothing; else if \(d < 0\), solve \(C: d = 0\)

Now we can solve this equation by calculating \(\Delta \vec{p1}, \Delta \vec{p2}, \Delta \vec{r1}\), and \(\Delta \vec{r2}\) that satisfy the constraint above. This is called the position-based method. It satisfies the constraint immediately in the current frame, but it might cause a jittery effect.

A much more modern and preferable method, used in Box2d, Chipmunk, Bullet, and my physics engine, is called the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above. Taking the time derivative of the position constraint gives:

\(\dot{C}: \dot{d} = (\vec{v2} + w2 \times \vec{r2} - \vec{v1} - w1 \times \vec{r1}) \cdot \vec{n} = 0\)

which we can write in matrix form as:

\(\dot{C}: JV = 0 \quad (1)\)

where \(J = \begin{bmatrix} -\vec{n}^{T} & -(\vec{r1} \times \vec{n}) & \vec{n}^{T} & \vec{r2} \times \vec{n} \end{bmatrix}\) and \(V = \begin{bmatrix} \vec{v1} \\ w1 \\ \vec{v2} \\ w2 \end{bmatrix}\)
Since we are working in 2D, the angular velocity and the cross product of two vectors are scalars.
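As a small illustration of this, the two kinds of 2D cross product the solver needs can be written as below. This is a minimal sketch; `Vec2`, `cross`, and `crossScalar` are illustrative names of mine, not the engine's actual API (the engine's `ftVector2::cross` and `invCross` play the same roles).

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Vector x vector: in 2D this collapses to the scalar z-component.
float cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Scalar angular velocity x arm vector: the velocity of a point
// rotating about the origin with angular velocity w.
Vec2 crossScalar(float w, Vec2 r) { return Vec2{ -w * r.y, w * r.x }; }
```

This is why, in 2D, terms like \(\vec{r} \times \vec{n}\) in the Jacobian are plain scalars.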

Next, we need to find \(\Delta V\), or the impulse, that satisfies the velocity constraint. This \(\Delta V\) is caused by a force we call the 'constraint force'. The constraint force only pushes along the direction of illegal movement, in our case the penetration normal. We don't want this force to do any work, i.e. to contribute to or restrict any motion in a legal direction. This gives the constraint force the form:

\(\vec{F_{c}} = J^{T}\lambda\)
\(\lambda\) is a scalar called the Lagrangian multiplier. To understand why the constraint force acts in the \(J^{T}\) direction (remember J is a 1 by 12 matrix, so \(J^{T}\) is a 12 by 1 matrix, i.e. a 12-dimensional vector), try to remember the equation of a three-dimensional plane through the origin:

\(\vec{n}^{T}\vec{v} = 0 \quad (2)\)

where \(\vec{n}\) is the plane's normal and \(\vec{v}\) is a point on the plane.
Now we can draw a similarity between equation (1) and equation (2), where \(\vec{n}^{T}\) corresponds to J and \(\vec{v}\) corresponds to V. So we can interpret equation (1) as a 12-dimensional plane and conclude that \(J^{T}\) is the normal of this plane. If a point lies off a plane, the shortest path from the point back to the plane is along the normal direction.


From Newton's second law, the impulse changes the velocity by \(\Delta V = M^{-1}J^{T}\lambda\). Substituting this into the velocity constraint \(J(V + \Delta V) = 0\) and solving for \(\lambda\) gives:

\(\lambda = \frac{-JV}{JM^{-1}J^{T}}\)

\(\Delta V = M^{-1}J^{T}\lambda \quad (3)\)

After we calculate the Lagrangian multiplier, we can recover the impulse from equation (3). Then we can apply this impulse to each rigid body.
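To make this concrete, here is a minimal sketch of computing \(\lambda\) for a single contact point and applying the resulting impulse. The types and names (`Vec2`, `Body`, `solveContact`) are my own simplified stand-ins, not the engine's actual API:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };
static Vec2  add(Vec2 a, Vec2 b)    { return { a.x + b.x, a.y + b.y }; }
static Vec2  sub(Vec2 a, Vec2 b)    { return { a.x - b.x, a.y - b.y }; }
static float dot(Vec2 a, Vec2 b)    { return a.x * b.x + a.y * b.y; }
static float cross(Vec2 a, Vec2 b)  { return a.x * b.y - a.y * b.x; }
static Vec2  crossS(float w, Vec2 r){ return { -w * r.y, w * r.x }; }

struct Body {
    float invMass, invMoment; // 1/m and 1/I (0 means infinite)
    Vec2  v;                  // linear velocity
    float w;                  // angular velocity
};

// Solve one contact constraint without bias: lambda = -JV / (J M^-1 J^T).
void solveContact(Body &a, Body &b, Vec2 n, Vec2 r1, Vec2 r2) {
    // JV = relative velocity of the contact point along the normal
    Vec2 dv = sub(add(b.v, crossS(b.w, r2)), add(a.v, crossS(a.w, r1)));
    float jv = dot(dv, n);

    // Effective mass J M^-1 J^T
    float rn1 = cross(r1, n);
    float rn2 = cross(r2, n);
    float k = a.invMass + b.invMass
            + a.invMoment * rn1 * rn1 + b.invMoment * rn2 * rn2;

    float lambda = -jv / k;

    // Apply the impulse P = J^T * lambda to both bodies
    a.v = sub(a.v, Vec2{ a.invMass * n.x * lambda, a.invMass * n.y * lambda });
    a.w -= a.invMoment * rn1 * lambda;
    b.v = add(b.v, Vec2{ b.invMass * n.x * lambda, b.invMass * n.y * lambda });
    b.w += b.invMoment * rn2 * lambda;
}
```

For two equal, non-rotating bodies approaching head-on, this drives the relative normal velocity to zero, i.e. a perfectly inelastic contact.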

    Baumgarte Stabilization

Note that solving the velocity constraint doesn't mean that we satisfy the position constraint. When we solve the velocity constraint, there is already a violation in the position constraint. We call this violation position drift. What we achieve is stopping the two bodies from penetrating deeper (the penetration depth stops growing). This might be fine for a slow-moving object, as the position drift is not noticeable, but it becomes a problem as the object moves faster. The animation below demonstrates what happens when we only solve the velocity constraint.

[Animation: WithoutBias.gif]

So instead of purely solving the velocity constraint, we add a bias term to fix any violation that happens in the position constraint:

\(\dot{C}: JV + b = 0\)

and the Lagrangian multiplier becomes:

\(\lambda = \frac{-(JV + b)}{JM^{-1}J^{T}}\)
So what is the value of the bias? As mentioned before, we need this bias to fix positional drift, so we want the bias to be proportional to the penetration depth:

\(b = -\frac{\beta}{\Delta t} d\)

This method is called Baumgarte stabilization and \(\beta\) is the Baumgarte term. The right value for this term can differ between scenarios; we need to tweak it between 0 and 1 to find a value that keeps our simulation stable.
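A sketch of how this bias might be computed (the function name is mine; the solver code below folds the same quantity in as `positionBias / dt`):

```cpp
#include <cassert>
#include <cmath>

// Baumgarte stabilization: feed a fraction of the penetration depth back
// into the velocity constraint as a corrective bias. 'depth' is the
// positive penetration depth, 'beta' is the Baumgarte term in [0, 1].
float baumgarteBias(float beta, float depth, float dt) {
    return -(beta / dt) * depth; // b in: lambda = -(JV + b) / (J M^-1 J^T)
}
```

A negative bias here increases \(\lambda\), so the impulse pushes the bodies apart a little harder each frame until the drift is gone.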



    Sequential Impulse

If our world consisted of only two rigid bodies and one contact constraint, the method above would work decently. But in most games there are more than two rigid bodies; one body can collide and penetrate two or more bodies, and we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all these constraints simultaneously is not feasible. Erin Catto proposes a practical solution called sequential impulse. The idea is similar to Projected Gauss-Seidel: we calculate \(\lambda\) and \(\Delta V\) for each constraint one by one, from constraint one to constraint n (n = number of constraints). After we finish iterating through the constraints and calculating \(\Delta V\), we repeat the process from constraint one to constraint n, until the specified number of iterations is reached. This algorithm converges to the actual solution: the more we repeat the process, the more accurate the result. In Box2d, Erin Catto sets ten as the default number of iterations.

Another thing to notice is that while solving one constraint, we might unintentionally satisfy another constraint. Say, for example, that we have two different contact constraints on the same rigid body.


When we solve \(\dot{C1}\), we might incidentally make \(\dot{d2} \geq 0\). Remember that equation (5) is a formula for \(\dot{C}: \dot{d} = 0\), not \(\dot{C}: \dot{d} \geq 0\), so we don't need to apply it to \(\dot{C2}\) anymore. We can detect this by looking at the sign of \(\lambda\): if \(\lambda\) is negative, the constraint is already satisfied, and using this negative lambda as an impulse would pull the bodies closer together instead of pushing them apart. It is fine for an individual \(\lambda\) to be negative, but we need to make sure the accumulated \(\lambda\) is not negative. In each iteration, we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrangian multiplier that we use to calculate the impulse is the difference between the new normalImpulseSum and the previous normalImpulseSum.
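The clamping scheme described above can be sketched as follows (the function name is mine; the accumulator plays the role of normalImpulseSum, called `nIAcc` in the solver code below):

```cpp
#include <cassert>

// Clamp the accumulated normal impulse to [0, +inf) and return the
// delta that should actually be applied this iteration.
float clampAccumulatedImpulse(float &accumulated, float lambda) {
    float oldAccum = accumulated;
    accumulated += lambda;
    if (accumulated < 0.0f) accumulated = 0.0f;
    return accumulated - oldAccum; // the impulse we really apply
}
```

Note that a single iteration may still apply a negative delta, but only enough to undo impulse applied earlier; the total never goes below zero.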


Okay, now we have successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted \(C_{r}\), is the ratio of the parting speed after the collision to the closing speed before the collision:

\(C_{r} = \frac{V_{parting}}{V_{closing}}\)
The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector:

\(V_{final} = -C_{r}(\vec{V}_{initial} \cdot \vec{n})\)
Notice that in this specific case \(\vec{V}_{initial} \cdot \vec{n}\) is exactly JV. If we look back at our constraint above, we set \(\dot{d}\) to zero because we assumed the object does not bounce back (\(C_{r} = 0\)). So if \(C_{r} \neq 0\), we can modify our constraint so the desired velocity is \(V_{final}\) instead of 0:

\(\dot{C}: JV + C_{r}(\vec{V}_{initial} \cdot \vec{n}) = 0\)
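A sketch of the desired post-collision normal velocity under this assumption (the helper name is mine; by convention the initial normal velocity JV is negative when the bodies are closing):

```cpp
#include <cassert>
#include <cmath>

// Desired separating velocity along the contact normal:
// vFinal = -Cr * vInitial, where vInitial (= JV) is negative when closing.
float desiredNormalVelocity(float restitution, float jvInitial) {
    return -restitution * jvInitial;
}
```

For example, with \(C_{r} = 0.5\) and a closing speed of 4, the bodies should separate at speed 2 after the collision.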


We can merge our old bias term with the restitution term to get a new bias value:

\(b = -\frac{\beta}{\Delta t}d + C_{r}(\vec{V}_{initial} \cdot \vec{n})\)

// init constraint
// Calculate J(M^-1)(J^T). This term is constant so we can calculate it first.
for (int i = 0; i < constraint->numContactPoint; i++) {
    ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];

    pointConstraint->r1 = manifold->contactPoints[i].r1 -
        (bodyA->transform.center + bodyA->centerOfMass);
    pointConstraint->r2 = manifold->contactPoints[i].r2 -
        (bodyB->transform.center + bodyB->centerOfMass);

    real kNormal = bodyA->inverseMass + bodyB->inverseMass;

    // Calculate r X normal
    real rnA = pointConstraint->r1.cross(constraint->normal);
    real rnB = pointConstraint->r2.cross(constraint->normal);

    // Calculate J(M^-1)(J^T).
    kNormal += (bodyA->inverseMoment * rnA * rnA + bodyB->inverseMoment * rnB * rnB);

    // Save inverse of J(M^-1)(J^T).
    pointConstraint->normalMass = 1 / kNormal;

    // Baumgarte bias, proportional to penetration depth
    // (field name assumed; this line was truncated in the original article)
    pointConstraint->positionBias = m_option.baumgarteCoef *
        manifold->penetrationDepth[i];

    ftVector2 vA = bodyA->velocity;
    ftVector2 vB = bodyB->velocity;
    real wA = bodyA->angularVelocity;
    real wB = bodyB->angularVelocity;
    ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) -
                    vA - pointConstraint->r1.invCross(wA));

    // Calculate JV
    real jnV = dv.dot(constraint->normal);
    pointConstraint->restitutionBias = -restitution * (jnV + m_option.restitutionSlop);
}

// solve constraint
while (numIteration > 0) {
    for (int i = 0; i < m_constraintGroup.nConstraint; ++i) {
        ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);
        int32 bodyIDA = constraint->bodyIDA;
        int32 bodyIDB = constraint->bodyIDB;
        ftVector2 normal = constraint->normal;
        ftVector2 tangent = normal.tangent();

        for (int j = 0; j < constraint->numContactPoint; ++j) {
            ftContactPointConstraint *pointConstraint = &(constraint->pointConstraint[j]);

            ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
            ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
            real wA = m_constraintGroup.angularVelocities[bodyIDA];
            real wB = m_constraintGroup.angularVelocities[bodyIDB];

            // Calculate JV. (jnV = JV, dv = derivative of d, JV = derivative(d) dot normal)
            ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) -
                            vA - pointConstraint->r1.invCross(wA));
            real jnV = dv.dot(normal);

            // Calculate lambda (lambda = -(JV + b) * (J(M^-1)(J^T))^-1)
            real nLambda = (-jnV + pointConstraint->positionBias / dt +
                            pointConstraint->restitutionBias) * pointConstraint->normalMass;

            // Add lambda to normalImpulse and clamp
            real oldAccumI = pointConstraint->nIAcc;
            pointConstraint->nIAcc += nLambda;
            if (pointConstraint->nIAcc < 0) {
                pointConstraint->nIAcc = 0;
            }

            // Find real lambda
            real I = pointConstraint->nIAcc - oldAccumI;

            // Calculate linear impulse
            ftVector2 nLinearI = normal * I;

            // Calculate angular impulse
            real rnA = pointConstraint->r1.cross(normal);
            real rnB = pointConstraint->r2.cross(normal);
            real nAngularIA = rnA * I;
            real nAngularIB = rnB * I;

            // Apply linear impulse
            m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
            m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;

            // Apply angular impulse
            m_constraintGroup.angularVelocities[bodyIDA] -= constraint->invMomentA * nAngularIA;
            m_constraintGroup.angularVelocities[bodyIDB] += constraint->invMomentB * nAngularIB;
        }
    }
    --numIteration;
}


    General Step to Solve Constraint

In this article, we have learned how to solve contact penetration by defining it as a constraint and solving it. But this framework is not only used to solve contact penetration. We can do many more cool things with constraints, like implementing hinge joints, pulleys, springs, etc.

So this is the step-by-step of constraint resolution:

1. Define the constraint in the form \(\dot{C}: JV + b = 0\). V is always \(\begin{bmatrix} \vec{v1} \\ w1 \\ \vec{v2} \\ w2\end{bmatrix}\) for every constraint, so we only need to find J, the Jacobian matrix, for that specific constraint.
2. Decide the number of iterations for the sequential impulse.
3. Find the Lagrangian multiplier by inserting the velocity, mass, and Jacobian matrix into this equation: \(\lambda = \frac{-(JV + b)}{JM^{-1}J^{T}}\)
4. Do step 3 for each constraint, and repeat the process for the chosen number of iterations.
5. Clamp the Lagrangian multiplier if needed.

    This marks the end of this article. Feel free to ask if something is still unclear. And please inform me if there are inaccuracies in my article. Thank you for reading.

NB: Box2d uses sequential impulse, but no longer uses Baumgarte stabilization; it uses full NGS to resolve position drift. Chipmunk still uses Baumgarte stabilization.


References

1. Allen Chou's post on Constraint Resolution
2. A Unified Framework for Rigid Body Dynamics
3. An Introduction to Physically Based Modeling: Constrained Dynamics
4. Erin Catto's Box2d and presentation on constraint resolution

[Video: Falton Debug Visualizer 18_01_2018 22_40_12.mp4]




    User Feedback

Some of your formulas are shown as text and not as formulas; can you double-check those?


@Finalspace Sorry, I think there were bugs in the equation editor, and it made me really frustrated, so I kind of gave up back then. Now I understand how to insert equations without the editor. Thank you for reminding me.

