Here is a good blog which covers a lot of the transitions:
http://blogs.msdn.com/b/chuckw/
Posted by Dirk Gregorius on 25 April 2015 - 10:34 AM
Posted by Dirk Gregorius on 25 April 2015 - 10:27 AM
This looks correct! BAUMGARTE should be 0.1 - 0.2.
Here are some more debug tips:
- Disable penetration recovery (maybe you compute the penetration wrong). Just set BAUMGARTE to zero. Simple test.
- Color your contact points. E.g. green for points you were able to warmstart and red for points that you could not match to old points. If you see a blinking 'Christmas tree' you know that this is the problem.
How do you handle more than four points? Check that your reduction code is working. You need to project all contact points onto the reference plane for culling.
Again, I think it is very likely some simple mistake and not a fundamental flaw in your math or implementation. I say this, because I always think that and then it is often just a stupid typo.
I like that you solve the friction constraints before the non-penetration constraints, since friction can violate non-penetration. Nevertheless, in my experience you get better results the other way around. As long as you don't have CCD and continuous physics you don't need to solve friction first.
Posted by Dirk Gregorius on 21 April 2015 - 12:59 PM
Yeah, you cannot just check the edges of the minimizing faces. I see people do this, but it is wrong. Also just checking the leaving edges from the vertices of the minimizing faces is not enough. There are configurations where this can fail as well.
Again, for performance the trick is to *not* call the SAT at all and utilize temporal coherence. If you find a separating axis check it in the next frame first. Chances are high it is still a separating axis. If you find two touching features inspect the relative orientation in the next frame and try to build a contact directly from these features if the relative configuration didn't change within some reasonable tolerance.
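The caching idea above can be sketched roughly like this (all names here are made up; a real implementation would store the cached axis per overlapping pair and only fall back to the full SAT when the cached axis stops separating):

```cpp
#include <cassert>
#include <cfloat>

struct Vec3 { float x, y, z; };

static float Dot( const Vec3& a, const Vec3& b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Projection of a convex hull's vertices onto an axis
struct Interval { float Min, Max; };

static Interval Project( const Vec3* Verts, int Count, const Vec3& Axis )
{
    Interval I{ FLT_MAX, -FLT_MAX };
    for ( int i = 0; i < Count; ++i )
    {
        float d = Dot( Verts[ i ], Axis );
        if ( d < I.Min ) I.Min = d;
        if ( d > I.Max ) I.Max = d;
    }
    return I;
}

// Cheap early-out: re-test the axis cached from the previous frame.
// Only if it no longer separates do we run the full SAT again.
static bool StillSeparated( const Vec3* VertsA, int CountA,
                            const Vec3* VertsB, int CountB,
                            const Vec3& CachedAxis )
{
    Interval A = Project( VertsA, CountA, CachedAxis );
    Interval B = Project( VertsB, CountB, CachedAxis );
    return A.Max < B.Min || B.Max < A.Min;
}
```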
Posted by Dirk Gregorius on 17 April 2015 - 07:00 PM
Yup, the only remaining problem now is to favor face contacts over edge contacts and to avoid feature flip-flops between faces. I don't know what bIsFaceContactA and bIsFaceContactB are doing, but a simple way to enforce this would be something along these lines (assuming MKS for the absolute tolerance):
const float kLinearSlop = 0.005f;
const float kRelEdgeTolerance = 0.90f;
const float kRelFaceTolerance = 0.98f;
const float kAbsTolerance = 0.5f * kLinearSlop;

if ( EdgeQuery.Separation > kRelEdgeTolerance * rnMax( FaceQuery1.Separation, FaceQuery2.Separation ) + kAbsTolerance )
{
    rnBuildEdgeContact( Out, Transform1, Hull1, Transform2, Hull2, EdgeQuery );
}
else
{
    if ( FaceQuery2.Separation > kRelFaceTolerance * FaceQuery1.Separation + kAbsTolerance )
    {
        // Face contact (2)
        bool FlipNormal = true;
        rnBuildFaceContact( Out, Transform2, Hull2, Transform1, Hull1, FaceQuery2, FlipNormal );
    }
    else
    {
        // Face contact (1)
        bool FlipNormal = false;
        rnBuildFaceContact( Out, Transform1, Hull1, Transform2, Hull2, FaceQuery1, FlipNormal );
    }
}
For the logic part here remember that at this point all separations are negative (since we didn't find a separating axis). So greater means closer to zero and max( s1, s2 ) returns the one closer to zero as well.
Posted by Dirk Gregorius on 17 April 2015 - 05:37 PM
The side plane normals are pointing outward for me. It really only depends on what your clipping code expects. The clipping code I posted expects outward pointing normals.
Posted by Dirk Gregorius on 17 April 2015 - 11:21 AM
#1:
A plane is defined by a normal and a point on the plane. The plane equation has the form: n*x - n*p = n*x - d = 0. You can now plug any point into the plane equation to determine whether the point is in front of, behind, or on the plane. If the normal has unit length (the so-called Hesse normal form) the result is the signed distance. So yes, the distance is signed, and it is simply computed like this:
Separation = Dot( Normal, Support ) - PlaneOffset // Where PlaneOffset is the d above or m_fDist in your code.
Make sure that the plane offset has the correct sign. Sometimes people store a plane as a*x + b*y + c*z + d = 0 to make it more SIMD friendly. In that case d = -n*p (note the minus!)
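As a minimal sketch of the two formulas above (all names are hypothetical, not from any particular engine):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot( const Vec3& a, const Vec3& b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Plane stored as n*x - d = 0. If Normal has unit length (Hesse normal
// form), Distance() returns the signed distance to the plane.
struct Plane
{
    Vec3 Normal;   // assumed to have unit length
    float Offset;  // d = Dot( Normal, PointOnPlane )
};

static Plane MakePlane( const Vec3& Normal, const Vec3& Point )
{
    return Plane{ Normal, Dot( Normal, Point ) };
}

static float Distance( const Plane& plane, const Vec3& point )
{
    // Positive -> in front, negative -> behind, zero -> on the plane
    return Dot( plane.Normal, point ) - plane.Offset;
}
```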
#2:
These are two different things if I understand you correctly. For the separating axis you don't need to do anything special, but for the contact normal you might need to flip it if it comes from object 2 as by definition we assume that it points from object 1 to object 2.
I hope I understand your question correctly; otherwise provide a simple example of what your problem is.
#3:
It is difficult to check your code since I cannot step through it. 'Real-Time Collision Detection' has a correct implementation of the algorithm which is pretty easy to understand. I copy&paste my implementation here. Maybe you can compare it and find your mistake yourself.
A nice thing about collision detection is that you can visualize everything. So in your case I would draw the face normals and make sure they always point away from the cube. Make sure all edge directions are correct and build a CW or CCW loop around each face. Finally, draw the side plane normals and make sure they point to the outside as well. After you have verified that all data is correct I would start investigating your clipping implementation. Otherwise you are just chasing waterfalls.
//--------------------------------------------------------------------------------------------------
void rnClipPolygon( rnPolygon& Out, const rnPolygon& Polygon, const rnPlane& Plane )
{
    RN_ASSERT( Out.Empty() );
    RN_ASSERT( Polygon.Size() >= 3 );

    rnVertex Vertex1 = Polygon.Back();
    float Distance1 = rnDistance( Plane, Vertex1.Position );

    for ( int Index = 0; Index < Polygon.Size(); ++Index )
    {
        rnVertex Vertex2 = Polygon[ Index ];
        float Distance2 = rnDistance( Plane, Vertex2.Position );

        if ( Distance1 <= 0.0f && Distance2 <= 0.0f )
        {
            // Both vertices are behind the plane - keep vertex2
            Out.PushBack( Vertex2 );
        }
        else if ( Distance1 <= 0.0f && Distance2 > 0.0f )
        {
            // Vertex1 is behind the plane, vertex2 is in front -> intersection point
            float Fraction = Distance1 / ( Distance1 - Distance2 );
            rnVector3 Position = Vertex1.Position + Fraction * ( Vertex2.Position - Vertex1.Position );

            // Keep intersection point
            rnVertex Vertex;
            Vertex.Position = Position;
            Out.PushBack( Vertex );
        }
        else if ( Distance2 <= 0.0f && Distance1 > 0.0f )
        {
            // Vertex2 is behind the plane, vertex1 is in front -> intersection point
            float Fraction = Distance1 / ( Distance1 - Distance2 );
            rnVector3 Position = Vertex1.Position + Fraction * ( Vertex2.Position - Vertex1.Position );

            // Keep intersection point
            rnVertex Vertex;
            Vertex.Position = Position;
            Out.PushBack( Vertex );

            // And also keep vertex2
            Out.PushBack( Vertex2 );
        }

        // Keep vertex2 as starting vertex for next edge
        Vertex1 = Vertex2;
        Distance1 = Distance2;
    }
}
Posted by Dirk Gregorius on 17 April 2015 - 11:01 AM
Well, torque is just the rate of change of angular momentum. The tensor transform stays the same. Assuming column vectors you transform the local inertia like this:
I_world = R * I_local * RT
If you use row vectors it works like this:
I_world = RT * I_local * R
My guess is that you might *not* have transformed all your equations to proper row vector form. I know row vectors are very popular in graphics, but for physics they are an easy way to shoot yourself in the foot.
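A quick sketch of the column vector version of the tensor transform (a minimal, hypothetical Mat3, not from any particular engine):

```cpp
#include <cassert>
#include <cmath>

// 3x3 matrix stored row-major
struct Mat3 { float m[3][3]; };

static Mat3 Mul( const Mat3& A, const Mat3& B )
{
    Mat3 C{};
    for ( int i = 0; i < 3; ++i )
        for ( int j = 0; j < 3; ++j )
            for ( int k = 0; k < 3; ++k )
                C.m[i][j] += A.m[i][k] * B.m[k][j];
    return C;
}

static Mat3 Transpose( const Mat3& A )
{
    Mat3 T{};
    for ( int i = 0; i < 3; ++i )
        for ( int j = 0; j < 3; ++j )
            T.m[i][j] = A.m[j][i];
    return T;
}

// Column vector convention: I_world = R * I_local * R^T
static Mat3 TransformInertia( const Mat3& R, const Mat3& ILocal )
{
    return Mul( Mul( R, ILocal ), Transpose( R ) );
}
```

E.g. rotating a diagonal inertia diag(1, 2, 3) by 90 degrees about the z-axis swaps the x and y entries, which is a handy sanity check.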
Posted by Dirk Gregorius on 16 April 2015 - 10:39 AM
Well, the embedding is correct. You can visualize this using parentheses when computing the world space angular momentum:
L = I * omega = R * I' * RT * omega = R * ( I' * ( RT * omega ) )
- First you basically transform the world space angular velocity to body space: RT * omega
- Then you apply the body space inertia (hence the prime to indicate body space): I' * ( RT * omega )
- Finally you transform everything back to world space to get the world space angular momentum: R * ( I' * ( RT * omega ) )
If you use row vectors you have to reverse the whole equation. E.g.
L = omega * I = omega * RT * I' * R => I = RT * I' * R
HTH,
-Dirk
Posted by Dirk Gregorius on 15 April 2015 - 01:01 PM
- You compute the vertices, local face planes and topology during pre-processing. You then compute the edge cross products at runtime.
- Edges are just indices into the vertex array.
HTH, I am not sure if I understand what the question is.
Quickhull and HE are indeed a very good solution for this problem. But the one I showed here is fine as well and easy to construct for e.g. boxes. The half-edge structure becomes really useful if you start pruning edge pairs that don't build a Minkowski face. But this is an optimization and I would handle this at the end when everything is working correctly.
Posted by Dirk Gregorius on 14 April 2015 - 07:32 PM
You mean A to B, B to C, C to D and then D to A I assume. Yes, that is correct, but all you need is an integer array for the edges and then store an offset into that array per face. At the first index you store the number of vertices for that face and the following entries contain the vertex indices for that face. In your example above the top face followed by the front face would be indexed like this: { 4, 1, 2, 6, 5, 4, 0, 3, 2, 1, ... }
Then in code you have an array of face planes and a parallel array of offsets into the topology list. In code then this could look something like this:
for ( int i = 0; i < faceCount; ++i )
{
    int offset = mTopologyOffsets[ i ];
    int vertexCount = mFaceTopology[ offset ];

    for ( int k = 1; k <= vertexCount; ++k )
    {
        int vertexIndex = mFaceTopology[ offset + k ];
        Vector3 vertex = mVertices[ vertexIndex ];
    }
}
This is easy and can get you going for a simple box and convex polyhedra.
Posted by Dirk Gregorius on 14 April 2015 - 02:12 PM
I'm skipping the edge-edge test if the cross product between them is smaller than 0.00001f.
If you don't normalize the edge vectors you need to take their lengths into account. Otherwise this would just be an absolute tolerance. Here is a good discussion of how to use relative and absolute tolerances together: http://realtimecollisiondetection.net/pubs/Tolerances/
Now (little off-topic), I'm facing with the problem of detecting the side planes of the reference face.
You don't clip against the neighboring planes. For every edge of the reference face you build a 'side' plane with the reference face normal. E.g. say you have a plane class which takes a normal and point on the plane:
// Build plane (v are the vertices and n is the normal of the reference face)
RnPlane ClipPlane( RnCross( v[ 2 ] - v[ 1 ], n ), v[ 1 ] );
ClipPlane.Normalize();
// Clip (e.g. using Sutherland-Hodgeman)
aPolyhedron.Clip( ClipPlane );
The incident face is just the face of the second object that points most in the negative direction of the reference face normal, right?
Yes, essentially the most anti-parallel face! In practice just the face with the smallest (negative) dot product with the reference face normal.
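That selection could look something like this (hypothetical names; in practice the face normals would come from the second hull transformed into a common space):

```cpp
#include <cassert>
#include <cfloat>

struct Vec3 { float x, y, z; };

static float Dot( const Vec3& a, const Vec3& b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Pick the incident face: the face of the second hull whose normal is
// most anti-parallel to the reference face normal, i.e. the one with
// the smallest (most negative) dot product.
static int FindIncidentFace( const Vec3* FaceNormals, int FaceCount,
                             const Vec3& ReferenceNormal )
{
    int Best = -1;
    float MinDot = FLT_MAX;
    for ( int i = 0; i < FaceCount; ++i )
    {
        float d = Dot( FaceNormals[ i ], ReferenceNormal );
        if ( d < MinDot )
        {
            MinDot = d;
            Best = i;
        }
    }
    return Best;
}
```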
Do you have any recommendation?
Did you see my presentations on the SAT? You can have a look at my 2013 and 2015 presentations here:
http://www.valvesoftware.com/company/publications.html
Posted by Dirk Gregorius on 14 April 2015 - 11:29 AM
You also want to check for parallel edges in the edge test. A random normal can generate a false separation or penetration.
E.g. you could use: |e1 x e2| = |e1| * |e2| * sin( alpha ) *and* sin( alpha ) ~= alpha for alpha << 1
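Put together, a parallel edge rejection test using that identity might look like this (the tolerance value is just an example, tune it to your units):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Cross( const Vec3& a, const Vec3& b )
{
    return Vec3{ a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
}

static float Length( const Vec3& v )
{
    return std::sqrt( v.x * v.x + v.y * v.y + v.z * v.z );
}

// |e1 x e2| = |e1| * |e2| * sin(alpha), and sin(alpha) ~= alpha for
// alpha << 1, so comparing against kTolerance * |e1| * |e2| is
// effectively an angular (relative) tolerance that also works for
// unnormalized edge vectors.
static bool IsParallel( const Vec3& e1, const Vec3& e2 )
{
    const float kTolerance = 0.005f; // roughly 0.3 degrees
    return Length( Cross( e1, e2 ) ) < kTolerance * Length( e1 ) * Length( e2 );
}
```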
Posted by Dirk Gregorius on 14 April 2015 - 11:21 AM
Not sure if this helps, but here is a really nice tutorial on point arithmetic (it starts about chapter 7):
http://www.flipcode.com/archives/3D_Geometry_Primer_Chapter_1-Introduction.shtml
Posted by Dirk Gregorius on 10 February 2015 - 06:33 PM
Here is a good discussion of the different position corrections:
https://code.google.com/p/box2d/source/browse/trunk/Box2D/Box2D/Dynamics/b2Island.cpp
Posted by Dirk Gregorius on 25 January 2015 - 12:29 PM
You want to pass vRa and vRb into your ApplyForce() function and not vContactPoint. Also make sure that prbLeft->m_vPos is the location of the center of mass and not the position of the rigid body. The way artists author models, the two are usually different. Don't assume that the position and the center of mass are coincident.
I also recommend renaming ApplyForce() to ApplyImpulse().
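A minimal sketch of what such an ApplyImpulse() could look like (all names are hypothetical; the inverse inertia is simplified to a diagonal here, a real implementation uses the full 3x3 world space inverse inertia):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Cross( const Vec3& a, const Vec3& b )
{
    return Vec3{ a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
}

struct Body
{
    float InvMass;
    Vec3 InvInertiaDiagonal; // world-space inverse inertia (diagonal approximation)
    Vec3 LinearVelocity;
    Vec3 AngularVelocity;
};

// r is the offset from the *center of mass* to the contact point,
// not from the body origin (see the note about m_vPos above).
static void ApplyImpulse( Body& body, const Vec3& impulse, const Vec3& r )
{
    body.LinearVelocity.x += body.InvMass * impulse.x;
    body.LinearVelocity.y += body.InvMass * impulse.y;
    body.LinearVelocity.z += body.InvMass * impulse.z;

    Vec3 L = Cross( r, impulse ); // angular impulse about the center of mass
    body.AngularVelocity.x += body.InvInertiaDiagonal.x * L.x;
    body.AngularVelocity.y += body.InvInertiaDiagonal.y * L.y;
    body.AngularVelocity.z += body.InvInertiaDiagonal.z * L.z;
}
```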