GJK Casey's way + closest point + illustrations.

I watched Casey's video and the following morning I wrote the test completely from memory, making illustrations along the way.
Casey's Video
Illustration image

The boolean intersection test passed all of my unit and random real-time tests, though they were not exhaustive.

I've tried to add closest-point results and I'm having some trouble.

The test ends with a triangle simplex. The separation vector is plausible, but the points are a little off.
Closest point test result image (closest point found to be in the center of these two "wrong" triangles).

Intuitively I would have thought the blue triangle would be on the top face, and the red triangle on the bottom face.
I believe this is due to Casey's algorithm quick-rejecting whenever possible rather than exhaustively searching the support.

Has anyone encountered this and can describe the solution?
Do I just need to check all/more of the Voronoi regions?

Source code: I hope it is easy to follow.

#ifndef __tGJK__
#define __tGJK__

#include "tSupportMapping.hpp"

namespace Sig { namespace Physics
{

struct tSimplexVertex
{
Math::tVec3f mA;
Math::tVec3f mB;
Math::tVec3f mMinkowski;

tSimplexVertex( )
{ }

tSimplexVertex( const Math::tVec3f& a, const Math::tVec3f& b, const Math::tVec3f& minkowski )
: mA( a )
, mB( b )
, mMinkowski( minkowski )
{ }
};

struct tSimplex
{
typedef tFixedGrowingArray< tSimplexVertex, 4 > tVertList;
tVertList mVerts;
Math::tVec3f mSearchDir;

void fClear( ) { mVerts.fSetCount( 0 ); }
};

class tGJK
{
public:
tGJK( tSupportMapping* a, tSupportMapping* b );

const tSupportMappingPtr& fA( ) const { return mA; }
const tSupportMappingPtr& fB( ) const { return mB; }
void fReset( ) { fInitialize( ); }

// keep calling this to keep the information up to date as objects move.
void fUpdate( );

// Feedback
b32 fIntersects( ) const { return mIntersects; }
const Math::tVec3f& fClosestPtA( ) const { return mClosestPtA; }
const Math::tVec3f& fClosestPtB( ) const { return mClosestPtB; }

// only for debugging.
static void fTest( );
void fDrawSimplexes( );
private:
tSupportMappingPtr mA;
tSupportMappingPtr mB;

tSimplex mSimplex;
b32 mIntersects;
Math::tVec3f mClosestPtA;
Math::tVec3f mClosestPtB;

void fInitialize( );

tSimplexVertex fSupport( const Math::tVec3f& worldD );

// return false if origin is within simplex.
b32 fUpdateSimplex( const tSimplexVertex& newVert );

// a final step to compute the actual closest pts on the shapes.
void fUpdateClosestPts( );

};

}}

#endif//__tGJK__



#include "BasePch.hpp"
#include "tGJK.hpp"

#include "tPhysicsWorld.hpp" //for debug rendering only

using namespace Sig::Math;

namespace Sig { namespace Physics
{

// -----------------------------------------------------
// See tGJK_Figures.png for illustrations of how this works.
// -----------------------------------------------------

namespace
{
b32 fSameDirection( const tVec3f& a, const tVec3f& b )
{
return a.fDot( b ) > 0.f;
}

// UNNORMALIZED
tVec3f fPlaneNormal( const tVec3f& a, const tVec3f& b, const tVec3f& c )
{
return (b-a).fCross( c-a );
}

tPlanef fUnnormalizedPlane( const tVec3f& a, const tVec3f& b, const tVec3f& c )
{
tVec3f normal = fPlaneNormal( a, b, c );
return tPlanef( normal, a );
}

b32 fOriginOutsidePlane( const tPlanef& plane )
{
return plane.d > 0.f;
}

tVec3f fEdgeToOriginDir( const tVec3f& ab, const tVec3f& ao )
{
return ab.fCross( ao.fCross( ab ) );
}
}

tGJK::tGJK( tSupportMapping* a, tSupportMapping* b )
: mA( a )
, mB( b )
, mIntersects( false )
, mClosestPtA( tVec3f::cZeroVector )
, mClosestPtB( tVec3f::cZeroVector )
{
fInitialize( );
}

void tGJK::fInitialize( )
{
sigassert( mA && mB );

mSimplex.fClear( );

tVec3f firstDir = tVec3f::cYAxis;
mSimplex.mVerts.fPushBack( fSupport( firstDir ) );

// first direction is towards the origin.
mSimplex.mSearchDir = -mSimplex.mVerts[ 0 ].mMinkowski;
}

tSimplexVertex tGJK::fSupport( const Math::tVec3f& worldD )
{
tVec3f a = mA->fWorldSupport( worldD );
tVec3f b = mB->fWorldSupport( -worldD );
return tSimplexVertex( a, b, a - b );
}

void tGJK::fUpdate( )
{
sigassert( mSimplex.mVerts.fCount( ) );

// clamp iterations
for( u32 i = 0; i < 2000; ++i )
//while( 1 )
{
tSimplexVertex newPt = fSupport( mSimplex.mSearchDir );


//if( mSimplex.mVerts.fBack( ).mMinkowski.fLengthSquared( ) > newPt.mMinkowski.fLengthSquared( ) )
if( fSameDirection( newPt.mMinkowski, mSimplex.mSearchDir ) )
{
// new pt is useful. see figure 0
if( !fUpdateSimplex( newPt ) )
{
mIntersects = true;
break; //origin is contained, intersection
}
}
else
{
// no point lies beyond origin., fig 0.
mIntersects = false;
break;
}
}

fUpdateClosestPts( );
}

b32 tGJK::fUpdateSimplex( const tSimplexVertex& newVert )
{
tSimplex::tVertList& verts = mSimplex.mVerts;
sigassert( verts.fCount( ) && "simplex needs to be initialized!" );

verts.fPushBack( newVert );

switch( verts.fCount( ) )
{
case 2:
{
// Line case, a is the newer pt.
const tVec3f& a = verts[ 1 ].mMinkowski;
const tVec3f& b = verts[ 0 ].mMinkowski;
tVec3f ab = b - a;
tVec3f ao = -a; //a to origin.

if( fSameDirection( ab, ao ) )
{
// origin is along segment. fig 1.
mSimplex.mSearchDir = fEdgeToOriginDir( ab, ao );
}
else
{
// origin is outside of segment, fig 1.
verts.fErase( 0 );
mSimplex.mSearchDir = ao; //search to origin.
}

break;
}
case 3:
{
// triangle case, a is the newer pt.
// see fig. 2
const tVec3f& a = verts[ 2 ].mMinkowski;
const tVec3f& b = verts[ 1 ].mMinkowski;
const tVec3f& c = verts[ 0 ].mMinkowski;
tVec3f ab = b - a;
tVec3f ac = c - a;
tVec3f ao = -a;

if( !fSameDirection( ab, ao ) )
{
// origin is outside of ab
if( !fSameDirection( ac, ao ) )
{
// origin is outside of ac
// origin is closest to a.
mSimplex.mSearchDir = ao;

// remove all features except a
verts[ 0 ] = verts[ 2 ];
verts.fSetCount( 1 );
}
else
{
//origin is closest to ac
mSimplex.mSearchDir = fEdgeToOriginDir( ac, ao );

//throw out b
verts.fEraseOrdered( 1 );
}
}
else //origin is along ab
{
if( fSameDirection( ac, ao ) )
{
// origin is inside of ac also
// so origin is closest to the face.
tPlanef facePlane = fUnnormalizedPlane( a, b, c );

if( fOriginOutsidePlane( facePlane ) )
mSimplex.mSearchDir = facePlane.fGetNormal( );
else
{
mSimplex.mSearchDir = -facePlane.fGetNormal( );

//rewind face in other direction
fSwap( verts[ 0 ], verts[ 1 ] );
}
}
else
{
//origin is closest to ab segment
mSimplex.mSearchDir = fEdgeToOriginDir( ab, ao );

//throw out c
verts.fEraseOrdered( 0 );
}
}
break;
}
case 4:
{
// tetrahedron case, a is the newer pt.
// see fig. 3
const tVec3f& a = verts[ 3 ].mMinkowski;
const tVec3f& b = verts[ 2 ].mMinkowski;
const tVec3f& c = verts[ 1 ].mMinkowski;
const tVec3f& d = verts[ 0 ].mMinkowski;
const tVec3f ao = -a;

tPlanef p1 = fUnnormalizedPlane( a, b, c );
tPlanef p2 = fUnnormalizedPlane( a, d, b );
tPlanef p3 = fUnnormalizedPlane( a, c, d );
b32 out1 = fOriginOutsidePlane( p1 );
b32 out2 = fOriginOutsidePlane( p2 );
b32 out3 = fOriginOutsidePlane( p3 );

if( out1 )
{
if( out2 )
{
if( out3 )
{
//origin is closest to a.
mSimplex.mSearchDir = ao;

//throw out all but a
verts[ 0 ] = verts[ 3 ];
verts.fSetCount( 1 );
}
else
{
//origin is along edge ab.
mSimplex.mSearchDir = fEdgeToOriginDir( b-a, ao );

//throw out c and d
verts[ 0 ] = verts[ 2 ];
verts[ 1 ] = verts[ 3 ];
verts.fSetCount( 2 );
}
}
else if( out3 )
{
//origin is along edge ac.
mSimplex.mSearchDir = fEdgeToOriginDir( c-a, ao );

//throw out d and b
verts[ 0 ] = verts[ 1 ];
verts[ 1 ] = verts[ 3 ];
verts.fSetCount( 2 );
}
else
{
//origin is outside p1 only
mSimplex.mSearchDir = p1.fGetNormal( );

//throw out d
verts.fEraseOrdered( 0 );
}
}
else if( out2 )
{
if( out3 )
{
//origin is along ad
mSimplex.mSearchDir = fEdgeToOriginDir( d-a, ao );

//throw out c and b
verts[ 1 ] = verts[ 3 ];
verts.fSetCount( 2 );
}
else
{
//origin is outside p2 only
mSimplex.mSearchDir = p2.fGetNormal( );

//throw out d
verts.fEraseOrdered( 1 );
}
}
else if( out3 )
{
//origin is outside p3 only
mSimplex.mSearchDir = p3.fGetNormal( );

//throw out b
verts.fEraseOrdered( 2 );
}
else
{
//origin is within all 3 planes, contained, intersection.
return false;
}

break;
}
default:
{
sigassert( !"Should not have gotten here." );
}
}
return true;
}

void tGJK::fUpdateClosestPts( )
{
tSimplex::tVertList& verts = mSimplex.mVerts;
sigassert( verts.fCount( ) && "simplex needs to be initialized!" );

switch( verts.fCount( ) )
{
case 1:
{
const tSimplexVertex& a = verts[ 0 ];

mClosestPtA = a.mA;
mClosestPtB = a.mB;

break;
}
case 2:
{
// Line case, a is the newer pt.
const tSimplexVertex& a = verts[ 1 ];
const tSimplexVertex& b = verts[ 0 ];
tVec3f ab = b.mMinkowski - a.mMinkowski;
tVec3f ao = -a.mMinkowski; //a to origin.

f32 abLen;
ab.fNormalize( abLen );

f32 bCoef = ab.fDot( ao ) / abLen;
f32 aCoef = 1.f - bCoef;

sigassert( fInBounds( aCoef, 0.f, 1.f ) );
sigassert( fInBounds( bCoef, 0.f, 1.f ) );

mClosestPtA = a.mA * aCoef + b.mA * bCoef;
mClosestPtB = a.mB * aCoef + b.mB * bCoef;

break;
}
case 3:
{
// triangle case, a is the newer pt.
// see fig. 2
const tSimplexVertex& a = verts[ 2 ];
const tSimplexVertex& b = verts[ 1 ];
const tSimplexVertex& c = verts[ 0 ];
tVec3f ab = b.mMinkowski - a.mMinkowski;
tVec3f ac = c.mMinkowski - a.mMinkowski;
tVec3f ao = -a.mMinkowski;

f32 abLen;
f32 acLen;
ab.fNormalize( abLen );
ac.fNormalize( acLen );

f32 bCoef = ab.fDot( ao ) / abLen;
f32 cCoef = ac.fDot( ao ) / acLen;
f32 aCoef = 1.f - bCoef - cCoef;

sigassert( fInBounds( aCoef, 0.f, 1.f ) );
sigassert( fInBounds( bCoef, 0.f, 1.f ) );
sigassert( fInBounds( cCoef, 0.f, 1.f ) );

mClosestPtA = a.mA * aCoef + b.mA * bCoef + c.mA * cCoef;
mClosestPtB = a.mB * aCoef + b.mB * bCoef + c.mB * cCoef;

break;
}
case 4:
{
// tetrahedron case, a is the newer pt.
// see fig. 3
const tSimplexVertex& a = verts[ 3 ];
/*const tSimplexVertex& b = verts[ 2 ];
const tSimplexVertex& c = verts[ 1 ];
const tSimplexVertex& d = verts[ 0 ];*/
//const tVec3f ao = -a;

// have not attempted this case yet.
mClosestPtA = a.mA;
mClosestPtB = a.mB;

break;
}
default:
{
sigassert( !"Should not have gotten here." );
}
}
}


void tGJK::fDrawSimplexes( )
{
u32 count = mSimplex.mVerts.fCount( );
if( count > 1 )
{
for( u32 i = 0; i < count - 1; ++i )
{
tPhysicsWorld::fDebugGeometry( ).fRenderOnce( mSimplex.mVerts[ i ].mA, mSimplex.mVerts[ i+1 ].mA, tVec4f( 1,0,0,1 ) );
tPhysicsWorld::fDebugGeometry( ).fRenderOnce( mSimplex.mVerts[ i ].mB, mSimplex.mVerts[ i+1 ].mB, tVec4f( 0,0,1,1 ) );
}
tPhysicsWorld::fDebugGeometry( ).fRenderOnce( mSimplex.mVerts[ 0 ].mA, mSimplex.mVerts[ count-1 ].mA, tVec4f( 1,0,0,1 ) );
tPhysicsWorld::fDebugGeometry( ).fRenderOnce( mSimplex.mVerts[ 0 ].mB, mSimplex.mVerts[ count-1 ].mB, tVec4f( 0,0,1,1 ) );
}
}


} }
For closest points you need a different termination condition. This is discussed in the Mollyrocket forums. Personally I didn't get away with Casey's optimizations: I tested the other regions as well and assert if I hit a wrong region. For closest points I hit these asserts frequently, so what I ended up with was to test Casey's regions first and then the rest. This has the same efficiency but is robust. The boolean query worked, but in some rare cases I also hit my asserts. Make sure you test overlapping spheres whose centers are identical; in this case the origin is already contained in a segment rather than a tetrahedron.
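
For reference, a minimal sketch of that kind of distance termination, written against the tGJK types posted above. fUpdateDistance and fClosestOnSimplex are hypothetical names (not part of the posted code), and the tolerance cEpsilon is an assumption to be tuned for your units.

// Hedged sketch only: a typical termination test for the distance (closest-point) query.
// Instead of stopping when the new support point fails to pass the origin, stop when it
// no longer makes meaningful progress toward the origin.
// fClosestOnSimplex( ) is a hypothetical helper returning the point on the current
// simplex that is closest to the origin.
void tGJK::fUpdateDistance( )
{
	const f32 cEpsilon = 1e-4f; // assumed relative tolerance

	for( u32 i = 0; i < 64; ++i )
	{
		tVec3f v = fClosestOnSimplex( );             // closest point on simplex to origin
		f32 vLenSq = v.fLengthSquared( );
		if( vLenSq < cEpsilon * cEpsilon )
		{
			mIntersects = true;                      // origin is (nearly) on the simplex
			break;
		}

		tSimplexVertex w = fSupport( -v );           // support toward the origin

		// v.v - v.w bounds how much closer the next simplex can get; stop when it is tiny.
		if( vLenSq - v.fDot( w.mMinkowski ) < cEpsilon * vLenSq )
		{
			mIntersects = false;                     // converged; v realizes the distance
			break;
		}

		if( !fUpdateSimplex( w ) )
		{
			mIntersects = true;                      // origin contained
			break;
		}
	}

	fUpdateClosestPts( );
}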

Personally I found that GJK doesn't work reliably for quadric shapes (e.g. cones or cylinders) in floating-point precision. This is why I only use it for polyhedral shapes. Spheres and capsules are treated as points and segments, and the radius is added afterwards.
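
A minimal sketch of that radius trick, using the feedback accessors from the posted tGJK header. The function name and the radius parameters are assumptions (in practice they would come from the sphere/capsule shapes themselves).

// Hedged sketch: run GJK on the core shapes (point / segment), then expand the result
// by the stored radii. Only valid while the cores are separated by more than radiusA + radiusB.
b32 fClosestPtsWithRadius( tGJK& gjk, f32 radiusA, f32 radiusB, tVec3f& outA, tVec3f& outB )
{
	gjk.fUpdate( );

	const tVec3f& a = gjk.fClosestPtA( );
	const tVec3f& b = gjk.fClosestPtB( );

	tVec3f n = b - a;              // from core A toward core B
	f32 dist;
	n.fNormalize( dist );          // same normalize-with-length pattern as the posted code

	if( dist <= radiusA + radiusB )
		return false;              // expanded shapes overlap; use another method (e.g. SAT)

	outA = a + n * radiusA;        // push out to the sphere / capsule surfaces
	outB = b - n * radiusB;
	return true;
}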

In my opinion the best presentation about GJK is by Erin Catto. He also deals with cases that are common in games, like linearly dependent vertices (e.g. three points on an edge). You can still use Casey's optimization by ordering your tests accordingly. This is what I do.

GDC 2010: Computing distance with GJK
http://code.google.com/p/box2d/downloads/list


HTH,
-Dirk
Thank you so very much. And thank you Casey and thank you Erin.
Thanks to Erin's slides this is working very well for non-penetration cases.

Here's the god-awful test I came up with.
Any suggestions on how to do the tetrahedron case better?

I attempted to use the tetrahedron barycentric coordinates, based on the fractional volumes.
I wasn't sure if that properly defined the Voronoi regions.


void tGJK::fEvolveSimplex( )
{
tSimplex::tVertList& verts = mSimplex.mVerts;
sigassert( verts.fCount( ) > 1 && "simplex needs to be initialized!" );

switch( verts.fCount( ) )
{
case 2:
{
// Line case, a is the newer pt.
const tSimplexVertex& a = verts[ 1 ];
const tSimplexVertex& b = verts[ 0 ];

tVec2f abBary = fBarycentricEdge( a.mMinkowski, b.mMinkowski );

if( abBary.x > 0 )
{
if( abBary.y > 0 )
{
edge_simplex( a, b, abBary );
}
else
{
// a is closest
vert_simplex( a );
}
}
else
{
// b is closest
vert_simplex( b );
}

break;
}
case 3:
{
// triangle case, a is the newer pt.
// see fig. 2
const tSimplexVertex& a = verts[ 2 ];
const tSimplexVertex& b = verts[ 1 ];
const tSimplexVertex& c = verts[ 0 ];

//sigassert( fUnsignedTriArea( a.mMinkowski, b.mMinkowski, c.mMinkowski ) > 0.0001f );

tVec2f abBary = fBarycentricEdge( a.mMinkowski, b.mMinkowski );
tVec2f bcBary = fBarycentricEdge( b.mMinkowski, c.mMinkowski );
tVec2f caBary = fBarycentricEdge( c.mMinkowski, a.mMinkowski );

if( abBary.y <= 0 && caBary.x <= 0 )
{
//vertex a is closest
vert_simplex( a );
}
else if( abBary.x <= 0 && bcBary.y <= 0 )
{
//vertex b is closest
vert_simplex( b );
}
else if( bcBary.x <= 0 && caBary.y <= 0 )
{
//vertex c is closest
vert_simplex( c );
}
else
{
//edge cases
tVec3f uvw = fBarycentricTri( a.mMinkowski, b.mMinkowski, c.mMinkowski );

if( uvw.z <= 0 && abBary.x > 0 && abBary.y > 0 )
{
//ab is closest
edge_simplex( a, b, abBary );
}
else if( uvw.y <= 0 && caBary.x > 0 && caBary.y > 0 )
{
//ca is closest
edge_simplex( c, a, caBary );
}
else if( uvw.x <= 0 && bcBary.x > 0 && bcBary.y > 0 )
{
//bc is closest
edge_simplex( b, c, bcBary );
}
else
{
//origin closest to face
sigassert( uvw.x > 0 && uvw.y > 0 && uvw.z > 0 );
tPlanef facePlane = fUnnormalizedPlane( a.mMinkowski, b.mMinkowski, c.mMinkowski );
plane_simplex( facePlane, -1 ); //no vertex to erase
}
}
break;
}
case 4:
{
// tetrahedron case, a is the newer pt.
// see fig. 3
tSimplexVertex& a = verts[ 3 ];
tSimplexVertex& b = verts[ 2 ];
tSimplexVertex& c = verts[ 1 ];
tSimplexVertex& d = verts[ 0 ];

tPlanef test = fUnnormalizedPlane( d.mMinkowski, c.mMinkowski, b.mMinkowski );
if( test.fSignedDistance( a.mMinkowski ) > 0.f )
{
//tetrahedron is inside out.
sigassert( fSignedTetraArea( a.mMinkowski, b.mMinkowski, c.mMinkowski, d.mMinkowski ) <= 0 );
fSwap( c, d );
}

tPlanef p1 = fUnnormalizedPlane( a.mMinkowski, b.mMinkowski, c.mMinkowski );
tPlanef p2 = fUnnormalizedPlane( a.mMinkowski, d.mMinkowski, b.mMinkowski );
tPlanef p3 = fUnnormalizedPlane( a.mMinkowski, c.mMinkowski, d.mMinkowski );
tPlanef p4 = fUnnormalizedPlane( d.mMinkowski, c.mMinkowski, b.mMinkowski );
b32 out1 = fOriginOutsidePlane( p1 );
b32 out2 = fOriginOutsidePlane( p2 );
b32 out3 = fOriginOutsidePlane( p3 );
b32 out4 = fOriginOutsidePlane( p4 );

//sigassert( p4.fSignedDistance( a ) < 0.00001f );
//sigassert( fSignedTetraArea( a, b, c, d ) >= 0 );

sigassert( (!out1 || !out2 || !out3 || !out4) && "Not possible. hedron inside out!" );

if( !out1 && !out2 && !out3 && !out4 )
{
//origin is within all 4 planes, contained, intersection.
mIntersects = true;
mTerminate = true;
}
else
{
tVec2f abBary = fBarycentricEdge( a.mMinkowski, b.mMinkowski );
tVec2f acBary = fBarycentricEdge( a.mMinkowski, c.mMinkowski );
tVec2f adBary = fBarycentricEdge( a.mMinkowski, d.mMinkowski );
tVec2f cdBary = fBarycentricEdge( c.mMinkowski, d.mMinkowski );
tVec2f cbBary = fBarycentricEdge( c.mMinkowski, b.mMinkowski );
tVec2f dbBary = fBarycentricEdge( d.mMinkowski, b.mMinkowski );

if( abBary.y <= 0 && acBary.y <= 0 && adBary.y <= 0 )
{
// a closest
vert_simplex( a );
}
else if( abBary.x <= 0 && dbBary.x <= 0 && cbBary.x <= 0 )
{
// b closest
vert_simplex( b );
}
else if( cdBary.y <= 0 && acBary.x <= 0 && cbBary.y <= 0 )
{
// c closest
vert_simplex( c );
}
else if( cdBary.x <= 0 && dbBary.y <= 0 && adBary.x <= 0 )
{
// d closest
vert_simplex( d );
}
else
{
//edge cases
tVec3f abc = fBarycentricTri( a.mMinkowski, b.mMinkowski, c.mMinkowski );
tVec3f adb = fBarycentricTri( a.mMinkowski, d.mMinkowski, b.mMinkowski );
tVec3f acd = fBarycentricTri( a.mMinkowski, c.mMinkowski, d.mMinkowski );
tVec3f dcb = fBarycentricTri( d.mMinkowski, c.mMinkowski, b.mMinkowski );

//tVec4f tetra = fBarycentricTetra( a,b,c,d );

const f32 cEps = 0.0f;

if( abBary.x > 0 && abBary.y > 0 && abc.z <= cEps && adb.y <= cEps )
//if( tetra.z <= 0 && tetra.w <= 0 )
{
//ab closest
edge_simplex( a, b, abBary );
}
else if( acBary.x > 0 && acBary.y > 0 && abc.y <= cEps && acd.z <= cEps )
//else if( tetra.y <= 0 && tetra.w <= 0 )
{
//ac closest
edge_simplex( a, c, acBary );
}
else if( adBary.x > 0 && adBary.y > 0 && acd.y <= cEps && adb.z <= cEps )
//else if( tetra.y <= 0 && tetra.z <= 0 )
{
//ad closest
edge_simplex( a, d, adBary );
}
else if( cbBary.x > 0 && cbBary.y > 0 && abc.x <= cEps && dcb.x <= cEps )
//else if( tetra.x <= 0 && tetra.w <= 0 )
{
//cb closest
edge_simplex( c, b, cbBary );
}
else if( cdBary.x > 0 && cdBary.y > 0 && acd.x <= cEps && dcb.z <= cEps )
//else if( tetra.x <= 0 && tetra.y <= 0 )
{
//cd closest
edge_simplex( c, d, cdBary );
}
else if( dbBary.x > 0 && dbBary.y > 0 && adb.x <= cEps && dcb.y <= cEps )
//else if( tetra.x <= 0 && tetra.z <= 0 )
{
//db closest
edge_simplex( d, b, dbBary );
}
else
{
//face cases
if( out1 && abc.fMin( ) >= 0.f )
{
plane_simplex( p1, 0 );
}
else if( out2 && adb.fMin( ) >= 0.f )
{
plane_simplex( p2, 1 );
}
else if( out3 && acd.fMin( ) >= 0.f )
{
plane_simplex( p3, 2 );
}
else if( out4 && dcb.fMin( ) >= 0.f )
{
plane_simplex( p4, 3 );
}
else
{
sigassert( !"WTF?" );
}
}
}
}

break;
}
default:
{
sigassert( !"Should not have gotten here." );
}
}
}


[quote]
Thanks to Erin's slides this is working very well for non-penetration cases.
[/quote]
Well, note GJK only works for disjoint shapes. In the penetrating case you need something else. I recommend SAT.


Here is my code for solving the 4-simplex. The plBarycentric*() functions return the divisor separately and always positive; the normalization takes place at the end. This makes the logic a little easier. It follows Erin's logic for the lower-dimension simplices, so it should be easy to follow. I also preferred to have functions for the barycentric coordinates to avoid code duplication/redundancy.



void plSimplex::Solve4( void )
{
// Get simplex
plSimplexVertex A = mVertices[ 0 ];
plSimplexVertex B = mVertices[ 1 ];
plSimplexVertex C = mVertices[ 2 ];
plSimplexVertex D = mVertices[ 3 ];


// Vertex regions
plReal wAB[ 3 ], wAC[ 3 ], wAD[ 3 ], wBC[ 3 ], wCD[ 3 ], wDB[ 3 ];
plBarycentricCoordinates( wAB, A.mPosition, B.mPosition, plVector3::kZero );
plBarycentricCoordinates( wAC, A.mPosition, C.mPosition, plVector3::kZero );
plBarycentricCoordinates( wAD, A.mPosition, D.mPosition, plVector3::kZero );
plBarycentricCoordinates( wBC, B.mPosition, C.mPosition, plVector3::kZero );
plBarycentricCoordinates( wCD, C.mPosition, D.mPosition, plVector3::kZero );
plBarycentricCoordinates( wDB, D.mPosition, B.mPosition, plVector3::kZero );

// VR( A )
if ( wAB[ 1 ] <= 0.0f && wAC[ 1 ] <= 0.0f && wAD[ 1 ] <= 0.0f )
{
// Reduce simplex
mVertexCount = 1;
mVertices[ 0 ] = A;

mLambda[ 0 ] = 1.0f;

return;
}

// VR( B )
if ( wAB[ 0 ] <= 0.0f && wDB[ 0 ] <= 0.0f && wBC[ 1 ] <= 0.0f )
{
// Reduce simplex
mVertexCount = 1;
mVertices[ 0 ] = B;

mLambda[ 0 ] = 1.0f;

return;
}

// VR( C )
if ( wAC[ 0 ] <= 0.0f && wBC[ 0 ] <= 0.0f && wCD[ 1 ] <= 0.0f )
{
// Reduce simplex
mVertexCount = 1;
mVertices[ 0 ] = C;

mLambda[ 0 ] = 1.0f;

return;
}

// VR( D )
if ( wAD[ 0 ] <= 0.0f && wCD[ 0 ] <= 0.0f && wDB[ 1 ] <= 0.0f )
{
// Reduce simplex
mVertexCount = 1;
mVertices[ 0 ] = D;

mLambda[ 0 ] = 1.0f;

return;
}


// Edge regions
plReal wACB[ 4 ], wABD[ 4 ], wADC[ 4 ], wBCD[ 4 ];
plBarycentricCoordinates( wACB, A.mPosition, C.mPosition, B.mPosition, plVector3::kZero );
plBarycentricCoordinates( wABD, A.mPosition, B.mPosition, D.mPosition, plVector3::kZero );
plBarycentricCoordinates( wADC, A.mPosition, D.mPosition, C.mPosition, plVector3::kZero );
plBarycentricCoordinates( wBCD, B.mPosition, C.mPosition, D.mPosition, plVector3::kZero );

// VR( AB )
if ( wABD[ 2 ] <= 0.0f && wACB[ 1 ] <= 0.0f && wAB[ 0 ] > 0.0f && wAB[ 1 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 2;
mVertices[ 0 ] = A;
mVertices[ 1 ] = B;

// Normalize
plReal divisor = wAB[ 2 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wAB[ 0 ] / divisor;
mLambda[ 1 ] = wAB[ 1 ] / divisor;

return;
}

// VR( AC )
if ( wACB[ 2 ] <= 0.0f && wADC[ 1 ] <= 0.0f && wAC[ 0 ] > 0.0f && wAC[ 1 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 2;
mVertices[ 0 ] = A;
mVertices[ 1 ] = C;

// Normalize
plReal divisor = wAC[ 2 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wAC[ 0 ] / divisor;
mLambda[ 1 ] = wAC[ 1 ] / divisor;

return;
}

// VR( AD )
if ( wADC[ 2 ] <= 0.0f && wABD[ 1 ] <= 0.0f && wAD[ 0 ] > 0.0f && wAD[ 1 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 2;
mVertices[ 0 ] = A;
mVertices[ 1 ] = D;

// Normalize
plReal divisor = wAD[ 2 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wAD[ 0 ] / divisor;
mLambda[ 1 ] = wAD[ 1 ] / divisor;

return;
}

// VR( BC )
if ( wACB[ 0 ] <= 0.0f && wBCD[ 2 ] <= 0.0f && wBC[ 0 ] > 0.0f && wBC[ 1 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 2;
mVertices[ 0 ] = B;
mVertices[ 1 ] = C;

// Normalize
plReal divisor = wBC[ 2 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wBC[ 0 ] / divisor;
mLambda[ 1 ] = wBC[ 1 ] / divisor;

return;
}

// VR( CD )
if ( wADC[ 0 ] <= 0.0f && wBCD[ 0 ] <= 0.0f && wCD[ 0 ] > 0.0f && wCD[ 1 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 2;
mVertices[ 0 ] = C;
mVertices[ 1 ] = D;

// Normalize
plReal divisor = wCD[ 2 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wCD[ 0 ] / divisor;
mLambda[ 1 ] = wCD[ 1 ] / divisor;

return;
}

// VR( DB )
if ( wABD[ 0 ] <= 0.0f && wBCD[ 1 ] <= 0.0f && wDB[ 0 ] > 0.0f && wDB[ 1 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 2;
mVertices[ 0 ] = D;
mVertices[ 1 ] = B;

// Normalize
plReal divisor = wDB[ 2 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wDB[ 0 ] / divisor;
mLambda[ 1 ] = wDB[ 1 ] / divisor;

return;
}


// Face regions
plReal wABCD[ 5 ];
plBarycentricCoordinates( wABCD, A.mPosition, B.mPosition, C.mPosition, D.mPosition, plVector3::kZero );

// VR( ACB )
if ( wABCD[ 3 ] < 0.0f && wACB[ 0 ] > 0.0f && wACB[ 1 ] > 0.0f && wACB[ 2 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 3;
mVertices[ 0 ] = A;
mVertices[ 1 ] = C;
mVertices[ 2 ] = B;

// Normalize
plReal divisor = wACB[ 3 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wACB[ 0 ] / divisor;
mLambda[ 1 ] = wACB[ 1 ] / divisor;
mLambda[ 2 ] = wACB[ 2 ] / divisor;

return;
}

// VR( ABD )
if ( wABCD[ 2 ] < 0.0f && wABD[ 0 ] > 0.0f && wABD[ 1 ] > 0.0f && wABD[ 2 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 3;
mVertices[ 0 ] = A;
mVertices[ 1 ] = B;
mVertices[ 2 ] = D;

// Normalize
plReal divisor = wABD[ 3 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wABD[ 0 ] / divisor;
mLambda[ 1 ] = wABD[ 1 ] / divisor;
mLambda[ 2 ] = wABD[ 2 ] / divisor;

return;
}

// VR( ADC )
if ( wABCD[ 1 ] < 0.0f && wADC[ 0 ] > 0.0f && wADC[ 1 ] > 0.0f && wADC[ 2 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 3;
mVertices[ 0 ] = A;
mVertices[ 1 ] = D;
mVertices[ 2 ] = C;

// Normalize
plReal divisor = wADC[ 3 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wADC[ 0 ] / divisor;
mLambda[ 1 ] = wADC[ 1 ] / divisor;
mLambda[ 2 ] = wADC[ 2 ] / divisor;

return;
}

// VR( BCD )
if ( wABCD[ 0 ] < 0.0f && wBCD[ 0 ] > 0.0f && wBCD[ 1 ] > 0.0f && wBCD[ 2 ] > 0.0f )
{
// Reduce simplex
mVertexCount = 3;
mVertices[ 0 ] = B;
mVertices[ 1 ] = C;
mVertices[ 2 ] = D;

// Normalize
plReal divisor = wBCD[ 3 ];
PL_ASSERT( divisor > 0.0f );

mLambda[ 0 ] = wBCD[ 0 ] / divisor;
mLambda[ 1 ] = wBCD[ 1 ] / divisor;
mLambda[ 2 ] = wBCD[ 2 ] / divisor;

return;
}


// *** Inside tetrahedron ***
plReal divisor = wABCD[ 4 ];
PL_ASSERT( divisor > 0.0f );

// VR( ABCD )
mLambda[ 0 ] = wABCD[ 0 ] / divisor;
mLambda[ 1 ] = wABCD[ 1 ] / divisor;
mLambda[ 2 ] = wABCD[ 2 ] / divisor;
mLambda[ 3 ] = wABCD[ 3 ] / divisor;
}


PL_FORCEINLINE
void plBarycentricCoordinates( plReal out[ 3 ], const plVector3& A, const plVector3& B, const plVector3& Q )
{
plVector3 QA, QB, AB;
plVec3Sub( QA, A, Q );
plVec3Sub( QB, B, Q );
plVec3Sub( AB, B, A );

// Last element is divisor
plReal divisor = plVec3Dot( AB, AB );

out[ 0 ] = plVec3Dot( QB, AB );
out[ 1 ] = -plVec3Dot( QA, AB );
out[ 2 ] = divisor;
}

//-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

PL_FORCEINLINE
void plBarycentricCoordinates( plReal out[ 4 ], const plVector3& A, const plVector3& B, const plVector3& C, const plVector3& Q )
{
plVector3 QA, QB, QC, AB, AC;
plVec3Sub( QA, A, Q );
plVec3Sub( QB, B, Q );
plVec3Sub( QC, C, Q );
plVec3Sub( AB, B, A );
plVec3Sub( AC, C, A );

plVector3 QB_x_QC, QC_x_QA, QA_x_QB;
plVec3Cross( QB_x_QC, QB, QC );
plVec3Cross( QC_x_QA, QC, QA );
plVec3Cross( QA_x_QB, QA, QB );

plVector3 AB_x_AC;
plVec3Cross( AB_x_AC, AB, AC );

// Last element is divisor
plReal divisor = plVec3Dot( AB_x_AC, AB_x_AC );

out[ 0 ] = plVec3Dot( QB_x_QC, AB_x_AC );
out[ 1 ] = plVec3Dot( QC_x_QA, AB_x_AC );
out[ 2 ] = plVec3Dot( QA_x_QB, AB_x_AC );
out[ 3 ] = divisor;
}


PL_FORCEINLINE
void plBarycentricCoordinates( plReal out[ 5 ], const plVector3& A, const plVector3& B, const plVector3& C, const plVector3& D, const plVector3& Q )
{
plVector3 AB, AC, AD;
plVec3Sub( AB, B, A );
plVec3Sub( AC, C, A );
plVec3Sub( AD, D, A );

plVector3 QA, QB, QC, QD;
plVec3Sub( QA, A, Q );
plVec3Sub( QB, B, Q );
plVec3Sub( QC, C, Q );
plVec3Sub( QD, D, Q );

// Last element is divisor (forced to be positive)
plReal divisor = plVec3Det( AB, AC, AD );

plReal sign = plMath::Sign( divisor );
out[ 0 ] = sign * plVec3Det( QB, QC, QD );
out[ 1 ] = sign * plVec3Det( QA, QD, QC );
out[ 2 ] = sign * plVec3Det( QA, QB, QD );
out[ 3 ] = sign * plVec3Det( QA, QC, QB );
out[ 4 ] = sign * divisor;
}





There is some room for optimization, but the code has shown itself to be very efficient, and I prefer clarity over performance in some cases. The speed-up that makes GJK fast comes from warmstarting the simplex (which gets you down to one iteration in many cases) rather than from theoretical optimizations in the simplex solver. Again, I recommend looking at Box2D if you are interested in these things.
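
The warmstarting idea maps naturally onto the posted tGJK class, since the simplex already persists between fUpdate calls. A small hedged sketch of seeding the first iteration from the previous frame's result; fInitializeWarm is a hypothetical addition, not part of the posted code.

// Hedged sketch: instead of always starting from cYAxis, reuse last frame's closest-point
// result as the initial support direction. With temporal coherence this often lets
// fUpdate converge in one iteration.
void tGJK::fInitializeWarm( )
{
	sigassert( mA && mB );

	tVec3f firstDir = mClosestPtB - mClosestPtA;     // previous separation direction
	if( firstDir.fLengthSquared( ) < 1e-8f )
		firstDir = tVec3f::cYAxis;                   // fall back to the cold-start direction

	mSimplex.fClear( );
	mSimplex.mVerts.fPushBack( fSupport( firstDir ) );
	mSimplex.mSearchDir = -mSimplex.mVerts[ 0 ].mMinkowski;
}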

HTH,
-Dirk
Thank you Dirk. Our methods look similar. I agree with clarity over minor optimizations.

I think I'm going to jerry-rig our hull builder into EPA for completeness, but I'll be considering some of the other techniques. When you say SAT, do you project the shapes onto the separating axes with the same support functions used for GJK, or do you resort to a totally separate system? And I assume you're suggesting a reduced list of axes to test, such as the randomly distributed method I've read about?

About the code you posted: I'm trying to figure out what plVec3Det does. Is this the determinant of a 3x3 matrix?
I can see that dividing at the end could potentially save a lot of unnecessary divisions. I'm not sure how it makes the logic easier to follow, though.
*edit: scratch that, I've found no need to compute tetrahedron barycentric coordinates*

Also, how do you handle degenerate simplices? Say you are computing barycentric coordinates for a triangle and find that the triangle has zero area. I currently just try to find an edge with positive length, compute line-segment barycentric coordinates, and return zero for the other vertex. Do you throw out that other vertex, or somehow prevent degenerate simplices before solving?
Personally I don't like EPA and recommend SAT instead. You can indeed use the support functions of the shapes. Rather than testing intervals, you can simply build planes for the features and then get the support point in the negative direction of the plane normal. The distance of that support point to your plane is your signed overlap. If you have the support function optimized for SIMD, you can use it directly in both your GJK and SAT. I tested this once, computing four support points at a time, but didn't see much gain from this on the PC. On the consoles it might make a difference, I would assume. So all in theory :)
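
A hedged sketch of that face-axis test, in terms of the tSupportMapping interface and tPlanef type from the posted code. The function name and the array of world-space face planes for shape A are assumptions used only for illustration.

// Hedged sketch of the face-axis case: for each world-space face plane of shape A,
// find B's support point opposite the face normal. The signed distance of that point
// to the plane is the separation along this axis (negative means overlap).
// Track the axis of maximum separation.
f32 fQueryFaceAxes( const tPlanef* facePlanesA, u32 planeCount, tSupportMapping& shapeB, u32& outBestFace )
{
	f32 bestSeparation = -1e30f; // effectively -infinity

	for( u32 i = 0; i < planeCount; ++i )
	{
		const tPlanef& plane = facePlanesA[ i ];

		// deepest point of B against this face of A
		tVec3f support = shapeB.fWorldSupport( -plane.fGetNormal( ) );
		f32 separation = plane.fSignedDistance( support );

		if( separation > bestSeparation )
		{
			bestSeparation = separation;
			outBestFace = i;
		}
	}

	return bestSeparation; // > 0 means a separating axis was found; otherwise it is the least overlap
}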

In order to optimize SAT I use different techniques than those publicly available, but you should be able to get a reasonably fast SAT using some of the techniques suggested by Pierre Terdiman. See his blog for details.

Right, plVec3Det() simply computes the determinant of the 3x3 matrix defined by three column vectors. This is equivalent to your signed volume function.
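
In other words, something along these lines: a sketch of the scalar triple product, written with the plVec3Dot/plVec3Cross helpers already used in the posted code (the exact implementation in Dirk's library may differ).

// Sketch: determinant of the 3x3 matrix whose columns are a, b, c,
// i.e. the scalar triple product a . (b x c), which is six times the signed
// volume of the tetrahedron spanned by the three vectors and the origin.
PL_FORCEINLINE
plReal plVec3Det( const plVector3& a, const plVector3& b, const plVector3& c )
{
	plVector3 bxc;
	plVec3Cross( bxc, b, c );
	return plVec3Dot( a, bxc );
}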

One reason for the divisor was that only in the 4D case can it be negative. Sorry, but I don't remember at the moment if there was another reason for this. I think I wanted to postpone the division until the end. Might be another unnecessary optimization. I coded for the consoles when I started and always have these things in mind. Luckily we only do PC at Blizzard, and on the PC I find it hard to see any difference from these kinds of optimizations.
The planes you're referring to: are these the faces of the polyhedral shapes? Or are these planes constructed from the points on B that make up the simplex, with the support of shape A then checked along the negated plane normal?

If it's the faces of the shapes, do you only check faces that point in the same direction as the vector between the object centers, i.e. faces on the inside of the two objects?
What I ended up doing, which is working really well, is I take the ending simplex, find the unique vertices for both bodies, and use those to fill a general SAT collider.

This often ends with a single face on one body and an edge on the other, or two edges, or a tetrahedron and an edge. Sometimes weird things happen as the bodies slide and the penetrating contact collides with an "inner edge".
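
A hedged sketch of the vertex-gathering step described above, written against the posted tSimplex types. The function name, the output array type, and the duplicate tolerance are assumptions.

// Hedged sketch: pull the distinct world-space vertices that each body contributed
// to the terminating simplex, so they can seed a small SAT collider.
void fGatherUniqueSimplexVerts( const tSimplex& simplex,
	tFixedGrowingArray< tVec3f, 4 >& outA, tFixedGrowingArray< tVec3f, 4 >& outB )
{
	const f32 cTolSq = 1e-8f; // assumed duplicate tolerance

	for( u32 i = 0; i < simplex.mVerts.fCount( ); ++i )
	{
		const tSimplexVertex& v = simplex.mVerts[ i ];

		b32 dupA = false, dupB = false;
		for( u32 j = 0; j < outA.fCount( ); ++j )
			if( ( outA[ j ] - v.mA ).fLengthSquared( ) < cTolSq ) { dupA = true; break; }
		for( u32 j = 0; j < outB.fCount( ); ++j )
			if( ( outB[ j ] - v.mB ).fLengthSquared( ) < cTolSq ) { dupB = true; break; }

		if( !dupA ) outA.fPushBack( v.mA );
		if( !dupB ) outB.fPushBack( v.mB );
	}
}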

I have the buffer radius turned off so I can thoroughly test the penetration case, but with it enabled this problem should be hard to run into.

Thanks for the ideas!
For the separating axes from the faces you simply use the face planes. For the edge cases I construct a plane through one vertex of the edge on body 1, with the cross product of the two edges as its normal. You have to make sure that the normal points toward body 2; to do this, build the vector from the center of body 1 to one of the edge vertices (on body 1) and dot it with the normal.
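
A hedged sketch of that edge-axis construction, free-standing and using the Math types from the posted code; the function name and parameter names are assumptions.

// Hedged sketch of the edge-edge axis: the candidate normal is the cross product of
// the two edge directions, built into a plane through one vertex of body 1's edge,
// and flipped so it points from body 1 toward body 2.
b32 fBuildEdgeAxis( const tVec3f& edge1A, const tVec3f& edge1B,   // edge on body 1
	const tVec3f& edge2Dir,                                       // direction of the edge on body 2
	const tVec3f& center1,                                        // center of body 1
	tPlanef& outPlane )
{
	tVec3f normal = ( edge1B - edge1A ).fCross( edge2Dir );
	if( normal.fLengthSquared( ) < 1e-10f )
		return false;                           // parallel edges: no useful axis

	// make sure the normal points away from body 1 (toward body 2)
	if( normal.fDot( edge1A - center1 ) < 0.f )
		normal = -normal;

	outPlane = tPlanef( normal, edge1A );       // same constructor as fUnnormalizedPlane
	return true;
}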

I am glad this is working out great for you. One thing to keep in mind is that skipping axes might lead to false positives, so in that sense you need to test all axes to detect a separation. On the other hand, not all of these axes will necessarily realize a contact later. Typical examples are compound shapes: think of two boxes sharing a face. The shared face's axis could indeed define a separating axis, but you want to build the contact from another face.

