
## help: advancing part of a frame


### #1 maxgpgpu (Members)

Posted 19 August 2011 - 11:29 PM

I have finally arrived at that fateful point in my engine development when I need to make objects collide "correctly". Both broad phase and narrow phase collision detection are working fine now, for both "convex" and arbitrarily irregular "concave" objects. That was enough challenge and work by itself, but now even more "fun" appears.

Having detected that a collision has happened, I need to back up the objects to somewhere between the previous frame (where the objects were not in collision) and the current frame (where they are in collision). I have the transformation matrix for last frame and for this frame. The question is, can I somehow create a new transformation matrix that is effectively a part way "interpolation" between the previous frame transformation matrix and the current frame transformation matrix (for some arbitrary fraction of 1 frame time)?

The problem is this. The transformation matrix is usually modified by only one or two transformations (one rotation and/or one translation). However, there is no limitation on how many rotations and translations can be performed between frames. Therefore, to try the alternative to "interpolating the matrix" I would have to queue up some representation of all rotations and translations (at least) between frames and keep them available for all objects, just in case they encountered a collision next frame.

Clearly it would be more convenient to interpolate matrices if that works.

I guess the alternative is to interpolate the intermediate matrix, if that is more practical. In other words, during any frame, all rotations and translations are applied to a clean [identity] matrix to accumulate the series of rotations and translations. When it is time to transform the vertices, that intermediate matrix is multiplied by the main transformation matrix that transforms local-coordinates into world-coordinates. Since that intermediate matrix ONLY contains information about the transformations happening this frame, perhaps that intermediate matrix is an easier item to interpolate.

I managed to get all the matrix math working, but definitely can't say I'm an expert at matrix math! If I was, I'd intuitively know how to solve this problem, and I don't. Hence my question.

I assume everyone who ever faces collision detection and response encounters this same problem. How is this done?

### #2 clb (Members)

Posted 20 August 2011 - 12:26 AM

> Having detected that a collision has happened, I need to back up the objects to somewhere between the previous frame (where the objects were not in collision) and the current frame (where they are in collision). I have the transformation matrix for last frame and for this frame. The question is, can I somehow create a new transformation matrix that is effectively a part way "interpolation" between the previous frame transformation matrix and the current frame transformation matrix (for some arbitrary fraction of 1 frame time)?

The usual way to approach this problem is to not do this backing up at the matrix level, but to go back to the physics subsystem, and apply a physics update with half the delta-time of the whole frame. Of course you can extract the required data from the matrices, but it will require some conversions.

> The problem is this. The transformation matrix is usually modified by only one or two transformations (one rotation and/or one translation). However, there is no limitation on how many rotations and translations can be performed between frames. Therefore, to try the alternative to "interpolating the matrix" I would have to queue up some representation of all rotations and translations (at least) between frames and keep them available for all objects, just in case they encountered a collision next frame.
>
> Clearly it would be more convenient to interpolate matrices if that works.

The usual way to interpolate between matrices which represent affine orthogonal bases is to decompose each matrix into a translation (float3), a rotation (quaternion) and a scale (float, or float3 depending on whether you allow nonuniform scale). Then you linearly interpolate the float3's and slerp the quaternions. To decompose a matrix into scale, rotate and translate (assuming no shear), you read the translation as the 3x1 vector in the top three elements of the fourth column (transpose that if working in v*M notation, like in D3D), then extract the magnitudes of the first three columns for scale, and finally convert the normalized 3x3 upper part to a quaternion (see Q55 at this page).

After interpolating the components individually, you compose the interpolated data back to a matrix form.
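To make that recipe concrete, here is a sketch in Python with NumPy. The helper names are mine, not from the thread; it assumes the column-vector convention with translation in the fourth column, per-axis scale, and no shear:

```python
import numpy as np

def mat_to_quat(R):
    # Convert a 3x3 rotation matrix to a unit quaternion (w, x, y, z).
    t = np.trace(R)
    if t > 0:
        s = np.sqrt(t + 1.0) * 2
        return np.array([0.25 * s,
                         (R[2, 1] - R[1, 2]) / s,
                         (R[0, 2] - R[2, 0]) / s,
                         (R[1, 0] - R[0, 1]) / s])
    i = int(np.argmax(np.diag(R)))
    j, k = (i + 1) % 3, (i + 2) % 3
    s = np.sqrt(R[i, i] - R[j, j] - R[k, k] + 1.0) * 2
    q = np.empty(4)
    q[0] = (R[k, j] - R[j, k]) / s
    q[1 + i] = 0.25 * s
    q[1 + j] = (R[j, i] + R[i, j]) / s
    q[1 + k] = (R[k, i] + R[i, k]) / s
    return q

def quat_to_mat(q):
    # Convert a unit quaternion (w, x, y, z) back to a 3x3 rotation matrix.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def slerp(q0, q1, t):
    # Spherical linear interpolation between two unit quaternions.
    d = np.dot(q0, q1)
    if d < 0:                   # take the shorter arc
        q1, d = -q1, -d
    if d > 0.9995:              # nearly parallel: lerp and renormalize
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    th = np.arccos(d)
    return (np.sin((1 - t) * th) * q0 + np.sin(t * th) * q1) / np.sin(th)

def interpolate_transform(A, B, t):
    # Decompose two column-vector 4x4 TRS matrices, interpolate the
    # pieces individually, and recompose the in-between matrix.
    def decompose(M):
        T = M[:3, 3]
        S = np.linalg.norm(M[:3, :3], axis=0)   # column magnitudes = scale
        return T, mat_to_quat(M[:3, :3] / S), S
    T0, q0, S0 = decompose(A)
    T1, q1, S1 = decompose(B)
    M = np.eye(4)
    M[:3, :3] = quat_to_mat(slerp(q0, q1, t)) * ((1 - t) * S0 + t * S1)
    M[:3, 3] = (1 - t) * T0 + t * T1
    return M
```

Slerping the quaternion rather than lerping the matrix entries is what keeps the interpolated basis orthonormal, which is the whole point of the decomposition.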

> I guess the alternative is to interpolate the intermediate matrix, if that is more practical. In other words, during any frame, all rotations and translations are applied to a clean [identity] matrix to accumulate the series of rotations and translations. When it is time to transform the vertices, that intermediate matrix is multiplied by the main transformation matrix that transforms local-coordinates into world-coordinates. Since that intermediate matrix ONLY contains information about the transformations happening this frame, perhaps that intermediate matrix is an easier item to interpolate.

I don't think working on an identity matrix for the duration of that frame makes it any easier, since if we denote by A the local->world matrix at the start of the frame, and by B the local->world matrix at the end of the frame, then you can compute B * A^{-1} to get the difference between A and B, which is exactly the series of transformations you applied to A during this frame to get B. So you don't need to explicitly store a separate clean identity matrix on top of which to apply the per-frame transformations; you can always compute it if you know A and B.
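A quick numerical check of that identity, sketched with NumPy (the example matrices are mine, chosen arbitrarily):

```python
import numpy as np

# A: local->world at the start of the frame; B: local->world at the end.
A = np.eye(4)
A[:3, 3] = [1.0, 0.0, 0.0]                  # start: translated +1 in x
c, s = np.cos(0.3), np.sin(0.3)
B = np.array([[c,  -s,  0.0, 2.0],          # end: rotated and translated
              [s,   c,  0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# D is the net per-frame transformation: applying D to A reproduces B,
# so there is no need to accumulate this frame's operations separately.
D = B @ np.linalg.inv(A)
```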


### #3 maxgpgpu (Members)

Posted 20 August 2011 - 06:34 PM

> Having detected that a collision has happened, I need to back up the objects to somewhere between the previous frame (where the objects were not in collision) and the current frame (where they are in collision). I have the transformation matrix for last frame and for this frame. The question is, can I somehow create a new transformation matrix that is effectively a part way "interpolation" between the previous frame transformation matrix and the current frame transformation matrix (for some arbitrary fraction of 1 frame time)?
>
> The usual way to approach this problem is to not do this backing up at the matrix level, but to go back to the physics subsystem, and apply a physics update with half the delta-time of the whole frame. Of course you can extract the required data from the matrices, but it will require some conversions.

What do you mean by "apply a physics update with half the delta-time of the whole frame"? The physics subsystem takes linear forces, rotational forces and an interval, converts those into altered linear and rotational velocity and acceleration, and updates the object by performing rotation and translation operations. Therefore, if I understand you, what you suggest requires that I queue up the set of physical influences that acted upon the objects the previous frame, rather than queue up the rotations and translations those influences cause. Is that correct?

That might have some advantages (I'll have to think more carefully and deeply on this question), but it doesn't seem to make the process any easier. I'd still need the transformation matrix from the previous frame (no problem), plus the queue of physical influences applied the previous frame. I guess that does avoid having to perform an interpolation (which is good), but otherwise doesn't save any work. Also, the number of potential physical influences is much greater than one rotation/translation matrix, so it could definitely be more work to perform.

While your way definitely has the advantage of purity (being grounded in the physical phenomena directly rather than indirectly through the matrix), performing those physics processes will often require much more compute power than fiddling with a matrix. Normally I wouldn't worry about this, but if we must perform these physical computations many times as we interpolate to find the time and position of first contact, we would be executing those [sometimes rather involved and compute-intensive] physics processes many times. I worry this might become a "frame buster" (slow the frame time so much that we miss the next vertical retrace), hence my preference for a way to interpolate matrices, or something simpler than repeating all the physics processes many times per frame for the two colliding objects.

> The problem is this. The transformation matrix is usually modified by only one or two transformations (one rotation and/or one translation). However, there is no limitation on how many rotations and translations can be performed between frames. Therefore, to try the alternative to "interpolating the matrix" I would have to queue up some representation of all rotations and translations (at least) between frames and keep them available for all objects, just in case they encountered a collision next frame.
>
> Clearly it would be more convenient to interpolate matrices if that works.
>
> The usual way to interpolate between matrices which represent affine orthogonal bases is to decompose the matrix to a series of translation (float3), rotation (quaternion) and scale (float, or float3 depending on if you allow nonuniform scale). Then, you linearly interpolate the float3's, and slerp the quaternions. To decompose a matrix to scale, rotate and translate (assuming no shear), you read the 3x1 column vector from the top three elements of the fourth column (transpose that if working on v*M notation, like in D3D), then extract the magnitudes of the first three columns for scale, and finally convert the normalized 3x3 upper part to a quaternion (see Q55 at this page).
>
> After interpolating the components individually, you compose the interpolated data back to a matrix form.

Thanks, I'll study those references. Hopefully I can avoid quaternions (I've managed to so far). My engine runs on OpenGL and uses column-vector matrices.

> I guess the alternative is to interpolate the intermediate matrix, if that is more practical. In other words, during any frame, all rotations and translations are applied to a clean [identity] matrix to accumulate the series of rotations and translations. When it is time to transform the vertices, that intermediate matrix is multiplied by the main transformation matrix that transforms local-coordinates into world-coordinates. Since that intermediate matrix ONLY contains information about the transformations happening this frame, perhaps that intermediate matrix is an easier item to interpolate.
>
> I don't think working on an identity matrix for the duration of that frame makes it any easier, since if we denote by A the local->world matrix at the start of the frame, and by B the local->world matrix at the end of the frame, then you can compute B * A^{-1} to get the difference between A and B, which is exactly the series of transformations you applied to A during this frame, to get B. So you don't need to explicitly store a separate clean identity matrix on top of which to apply the per-frame transformations, you can always compute it if you know A and B.

I think we can assume no shear or scale operations on the frame before a collision. While I do support shear, skew and [non-linear] scale operations, I purposely handle those operations differently than most engines. In my engine, shear, skew and scale operations do not alter the transformation matrix. This eliminates hassles transforming surface vectors (normal, tangent, bi-tangent), and eliminates any need to carry along (or compute on demand) any inverse transformation matrices. Thus, my transformation matrices are as clean as can be, since nothing but rotations and translations alter them.

Thanks for the ideas and links.

### #4 maxgpgpu (Members)

Posted 22 August 2011 - 10:55 PM

After some further thought, I realized how dumb my first response paragraph was. Doh!

In the previous frame, some [potentially large] number of physical influences each generated a linear and rotational force on the object. Those forces essentially got vector-summed to generate a single linear force vector and a single rotational force vector. From those two vectors the linear and rotational velocity and acceleration got updated, and from that, one translation and one rotation operation was generated and then applied to the object.

Which means, I can back up to an arbitrary in-between frame time by taking those two force vectors, multiplying them by a fraction, then recomputing a new translation and rotation vector to apply to the previous-frame local-to-world transformation matrix. Bingo.

While that's not quite as fast as what I was hoping for, it is fast enough and doesn't require the engine queue up an unknown and arbitrary length set of rotation and translation operations (or physical force computations). The great thing about those two force vectors is... they can be fractionalized in the easy way to produce a correct intermediate-frame result.

In other words, you were right, and I just didn't see why at first... probably because I thought you intended me to back up further into the physics engine than I thought (go back all the way and recompute all the forces). But why bother? We can take an arbitrary fraction of the result force vectors and work from there. Great news. Thanks.
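A minimal sketch of that idea (the names are mine, not the engine's): given the state at the previous frame and the single net force summed for that frame, re-run one integration step over only a fraction of the frame time, instead of replaying every individual physical influence. Scalar positions/velocities are used for brevity; the same applies componentwise to the 3-vectors, and to torque and angular velocity for the rotation part.

```python
def partial_step(pos, vel, net_force, mass, dt, t):
    """Advance the previous-frame state by fraction t (0..1) of the frame
    time dt, using the frame's summed net force (semi-implicit Euler)."""
    h = t * dt                            # fractional interval
    vel = vel + (net_force / mass) * h    # update velocity first
    pos = pos + vel * h                   # then position
    return pos, vel
```

Note one subtlety: re-integrating over the fractional interval t*dt is the cleaner formulation of "multiplying the force by a fraction", since position depends quadratically on the interval when the force accelerates the object.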

### #5 wildbunny (Members)

Posted 24 August 2011 - 12:31 PM

Are you sure you really want to be backing up the entire simulation to find the collision time? There are much nicer (i.e. quicker) ways to achieve the same result these days.

For example, you can use conservative advancement to get the TOI without backing everything up.

http://www.continuou...onDetection.pdf

http://www.wildbunny...on-for-dummies/

And/or you can use speculative contacts if you want an even faster approximation (which was good enough for Little Big Planet and other AAA titles)

http://www.wildbunny...pproach-part-1/

Both of these techniques work in 2d or 3d...
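For intuition, a toy conservative-advancement loop for two linearly moving spheres might look like the sketch below (my own illustrative code, 2D, no rotation; real implementations also bound angular motion). Each iteration advances time by a safe lower bound: closest distance divided by the maximum closing speed.

```python
import math

def conservative_advancement(c0, v0, r0, c1, v1, r1, dt,
                             iters=32, eps=1e-6):
    """Fraction of the frame [0..1] at which two linearly moving circles
    first touch, or None if they do not touch this frame.
    c = center (x, y), v = velocity per second, r = radius."""
    t = 0.0
    for _ in range(iters):
        p0 = (c0[0] + v0[0] * t * dt, c0[1] + v0[1] * t * dt)
        p1 = (c1[0] + v1[0] * t * dt, c1[1] + v1[1] * t * dt)
        dist = math.hypot(p1[0] - p0[0], p1[1] - p0[1]) - (r0 + r1)
        if dist < eps:
            return t                        # first contact at fraction t
        closing = math.hypot(v0[0] - v1[0], v0[1] - v1[1]) * dt
        if closing == 0.0:
            return None                     # no relative motion
        t += dist / closing                 # safe conservative step
        if t > 1.0:
            return None                     # no contact within the frame
    return t
```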

Cheers, Paul.

### #6 maxgpgpu (Members)

Posted 25 August 2011 - 08:29 PM

> Are you sure you really want to be backing up the entire simulation to find the collision time? There are much nicer (i.e. quicker) ways to achieve the same result these days.
>
> For example, you can use conservative advancement to get the TOI without backing everything up.
>
> http://www.continuou...onDetection.pdf
>
> http://www.wildbunny...on-for-dummies/
>
> And/or you can use speculative contacts if you want an even faster approximation (which was good enough for Little Big Planet and other AAA titles)
>
> http://www.wildbunny...pproach-part-1/
>
> Both of these techniques work in 2d or 3d...
>
> Cheers, Paul.

Wow, lots of great material. Thanks. And on top of umpteen PDF files I already have on this topic.

Tell me, though: compared to the techniques described above, what's the drawback of tracing back through the Minkowski sum along the relative motion vector until we find the outer surface (can't go any further), and taking the length of that vector as the distance to backtrack to find approximate "first contact"? Granted, that doesn't much help with rotation unless somehow we're recomputing the Minkowski sum as we go (which I guess is possible, but obviously slower).

I have read your material, and lots of other internet-downloaded PDFs on the same topic, but it will take a fair quantity of my brain grappling with all these approaches to figure out the relative merits, drawbacks and tradeoffs. Thanks for the information - I just wish you could transfer your comprehension and clarity as easily! :-)

### #7 wildbunny (Members)

Posted 26 August 2011 - 03:34 AM

> Tell me though. Compared to the techniques described above, what's the drawback of tracing back through the Minkowski sum along the relative motion vector until we find the outer surface (can't go any further), and taking the length of that vector as the distance to backtrack to find approximate "first contact"?

The drawback is that you're doing things the wrong way around if you do it like this - letting objects get into penetration and then back-tracking is the same amount of work as firing a ray from the origin to the MD of the two shapes along the delta velocity (as described by Gino van den Bergen), and getting the point and time of first contact. If you do it that way, you don't need to worry about back-tracking at all, and you save yourself the first collision check which tells you that you need to back-track.
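For two spheres, that ray-cast formulation reduces to a quadratic, which makes a small self-contained illustration possible (my own sketch, 2D and linear motion only; the general case uses a GJK-based ray cast against the Minkowski difference):

```python
import math

def sphere_toi(c0, v0, r0, c1, v1, r1):
    """Fraction of the frame [0..1] at which two moving circles first
    touch, found by casting a ray from the origin against their Minkowski
    difference (a circle of radius r0 + r1 around the center offset)."""
    # Reduce to one point moving relative to one circle.
    px, py = c1[0] - c0[0], c1[1] - c0[1]
    vx, vy = v1[0] - v0[0], v1[1] - v0[1]
    r = r0 + r1
    # Solve |p + v t| = r, a quadratic a t^2 + b t + c = 0.
    a = vx * vx + vy * vy
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - r * r
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return None                 # no relative motion, or paths never touch
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if 0.0 <= t <= 1.0 else None
```

Because the contact time comes straight out of the quadratic, no penetrate-then-backtrack pass is needed, which is exactly the point being made above.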

> Well, that doesn't much help with rotation unless somehow we're recomputing the Minkowski sum as we go (which I guess is possible, but obviously slower).

That's what conservative advancement does. Yes, it's slower, but it works very well, and if you only need an approximate result, just run it for a couple of iterations - the likelihood is that objects are moving/rotating so fast that you can't see the slight error anyway.

Cheers, Paul.

### #8 maxgpgpu (Members)

Posted 26 August 2011 - 01:23 PM

> Tell me though. Compared to the techniques described above, what's the drawback of tracing back through the Minkowski sum along the relative motion vector until we find the outer surface (can't go any further), and taking the length of that vector as the distance to backtrack to find approximate "first contact"?
>
> The drawback is that you're doing things the wrong way around if you do it like this - letting objects get into penetration and then back-tracking is the same amount of work as firing a ray from the origin to the MD of the two shapes along the delta velocity (as described by Gino van den Bergen), and getting the point and time of first contact. If you do it that way, you don't need to worry about back-tracking at all, and you save yourself the first collision check which tells you that you need to back-track.
>
> Well, that doesn't much help with rotation unless somehow we're recomputing the Minkowski sum as we go (which I guess is possible, but obviously slower).
>
> That's what conservative advancement does. Yes, it's slower, but it works very well, and if you only need an approximate result, just run it for a couple of iterations - the likelihood is that objects are moving/rotating so fast that you can't see the slight error anyway.
>
> Cheers, Paul.

I much appreciate your feedback, and I do intend to read and re-read all the descriptions I can find before I make any final decisions. But with great respect for your experience with this, I worry about any scheme that depends on pre-emptively finding collisions before they happen. That can probably work for a great many games, and maybe in 99.9% of instances in every game, but what happens when a game character kicks a soccer ball, shoots an arrow or gun, or something blows up? What I mean is, sometimes objects will go from zero to FAST (and cover a significant distance) in less than one frame. Other cases exist too, including multiple ricochets of multiple objects that then collide on the next frame. How on earth could any engine ever anticipate all these cases? And if it can't, I guess you must be saying the engine needs two schemes: one predicting collisions and another for when that doesn't work. But if the latter algorithm works effectively, why implement the former?

OTOH, I have a feeling I'm misunderstanding some very significant detail about the scheme you mention above (Gino's). It sounds like you're saying "once you find a collision, you go back to the object states in the previous frame and compute forward to find first contact, rather than compute backwards from penetration." If so, I agree that seems like an identical amount of work (and pretty much the exact same code). However, I don't see how that gives any better or worse estimate of first-contact time or position than back-tracking. But you seem to be saying a collision is never checked for or detected at all? If so, how does the program know which objects to trace forward to [possibly] find first contact time? Sorry if I'm asking questions that should be clear after reading his paper. Probably I'm reading too many papers on the general topic and can't keep any single approach clear enough in my mind. But I can't see how the initial collision check can be avoided without needing to perform n-squared of these processes (one for every object-pair combination). I'll go back and re-read Gino again.

### #9 wildbunny (Members)

Posted 28 August 2011 - 05:01 AM

> ...gun, or something blows up. What I mean is, sometimes objects will go from zero to FAST (and a significant distance) in less than one frame. And other cases exist too, including multiple ricochets of multiple objects that then collide on the next frame. How on earth could any engine ever anticipate all these cases?

The 'how' is that you first of all bound the motion of the object in question with an AABB. You know where the object is now, and you know where it will be next frame (i.e. possibly very far away), so you just stick an AABB around the entire motion of the object, do the same for all other objects in the game, and then you have the set of all possible interactions.

Regarding ricochets, all you need is to bound what 'could happen' with the AABB overlaps; this gives you your interaction islands. Then you let the solver run and find a feasible solution. If you use speculative contacts in the solver, you can move the TOI problem out of the collision code and into the physics solver. Check my blog article for details.

I just wanted to leave you with a thought about your current technique of back-tracking from penetration: how do you deal with the bullet-through-paper problem? Seems like you would miss those collisions... That's one of the main reasons to go forward in time rather than back-tracking.

Cheers, Paul.

### #10 maxgpgpu (Members)

Posted 29 August 2011 - 04:39 AM

> ...gun, or something blows up. What I mean is, sometimes objects will go from zero to FAST (and a significant distance) in less than one frame. And other cases exist too, including multiple ricochets of multiple objects that then collide on the next frame. How on earth could any engine ever anticipate all these cases?
>
> The 'how' is that you first of all bound the motion of the object in question with an AABB. You know where the object is now, and you know where it will be next frame (i.e. possibly very far away), so you just stick an AABB around the entire motion of the object, do the same for all other objects in the game, and then you have the set of all possible interactions.
>
> Regarding ricochets, all you need is to bound what 'could happen' with the AABB overlaps; this gives you your interaction islands. Then you let the solver run and find a feasible solution. If you use speculative contacts in the solver, you can move the TOI problem out of the collision code and into the physics solver. Check my blog article for details.
>
> I just wanted to leave you with a thought about your current technique of back-tracking from penetration: how do you deal with the bullet-through-paper problem? Seems like you would miss those collisions... That's one of the main reasons to go forward in time rather than back-tracking.
>
> Cheers, Paul.

Okay, I'm trying to get my head around this. It sounds like your fundamental "frame loop" (or "game loop") is constructed differently. Maybe that will help make your approach work better. With the addition of collision-detection and collision-response, my current approach is indeed stuck with a very unpleasantly long serial set of steps before the engine can draw. Otherwise a collision might move an object after it has been rendered, and the only way to fix that is to clear that back-buffer and redraw everything again. That would be very unpleasant and too slow. So I'm trying to see whether you have a way around this by re-organizing your frame loop. Hmmmm.

#00: frame++;
#01: process network input (generates messages in message queue)
#02: process OS event queue (generates messages in message queue)
#03: process client application (generates messages in message queue)
#04: process message queue (generates and sums forces on objects)
#05: convert linear/rotational forces into new linear/rotational velocities
#06: from #05 compute object rotation/translation this frame
#07: transform object vertices => VBOs.
#08: perform collision-detection (identifies intersecting objects)
#09: perform collision-response (modify forces on objects)
#10: if any collision-response modified any object forces, for those objects only
--- A: convert linear/rotational forces into new linear/rotational velocities
--- B: compute object rotation/translation for partial frame (starting at collision TOI)
--- C: transform object vertices => VBOs
--- D: hopefully no need to recursively perform collision-detection/response and #10
#11: draw VBOs.
#12: swap buffers
#13: clear back buffer
#14: go to #00

Though I've left out a few low-overhead subsystems (sound, for instance), that's about it. So... now that I look at it, I don't see how you're modifying the frame loop, except that you're performing collision-detection on a swept volume. So I guess that means you are performing classic collision-detection on those swept volumes, which means you have completely transformed the objects to their next-frame state in order to create the Minkowski sum/difference (whichever you prefer to call it) of the swept volumes. Then, when you find they are in collision, you perform the forward-looking pseudo-iteration (maybe done analytically when possible) from the last frame state to determine the time of first impact before this frame. That is, if I understand your comments correctly.

If so, you indeed are not backtracking, and don't need to, though it is difficult to believe information about the object intersections doesn't or wouldn't help your computations in some way, if only "corroboration" of the [hopefully] analytic solution.

I do not compute swept volumes, though I certainly could. Instead, I note simple facts about "bullet through paper" type problems. Namely, for object A to have passed through object B, object A must move to opposite sides of object B on all three world-coordinate axes (X, Y, Z) - where "overlapping" the other object counts as "on the opposite side" for purposes of this computation. That does not prove the objects collided, but provides a trivial analytical way to find the time of closest approach [and impact] where the objects can be moved and checked for impact or separation, or the swept volumes could be checked from the first place the bounding spheres/AABBs touched to the last place the bounding spheres/AABBs touched, ala what you suggest. My main "justification" for this approach is that the number of false collisions generated by performing collision detection on swept volumes can be very high, especially when performed on every moving object every frame.
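The per-axis test described above could be sketched like this (my own illustrative helper, not engine code; boxes given as per-axis (min, max) intervals, with "overlapping counts as opposite side" exactly as stated). Note it is a necessary condition only, a cheap filter before the analytical closest-approach check:

```python
def may_have_tunnelled(a_prev, a_curr, b):
    """Per-axis tunnelling filter: for A to have passed through B, on every
    world axis A must end up on the opposite side of B from where it
    started, where overlapping B's extent counts as 'opposite side'.
    Boxes are ((x0,x1), (y0,y1), (z0,z1)) min/max intervals."""
    def side(iv, ref):
        # -1: entirely below ref, +1: entirely above, 0: overlapping.
        if iv[1] < ref[0]:
            return -1
        if iv[0] > ref[1]:
            return 1
        return 0
    for axis in range(3):
        s0 = side(a_prev[axis], b[axis])
        s1 = side(a_curr[axis], b[axis])
        # Passed/overlapped on this axis iff the side changed or either
        # endpoint state overlaps B; otherwise A stayed on one side.
        if s0 == s1 and s0 != 0:
            return False
    return True
```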

I'm slowly getting my head around the other techniques, but I'm not yet sure there is much difference. Unfortunately, the biggest hassle is we can't draw the damn VBOs until we're sure all collisions are resolved, unless we decide to tolerate up to one frame of "total BS" in cases of collision, and hope "nobody much notices". That might almost work at 60FPS, but probably not so well at 30FPS (or 20FPS or 15FPS).

So I'm almost there, but not quite. Maybe a couple more runs through all the material will crystallize the various approaches better, though the more I look, the more similar they seem (and similar to what I'm doing already, though not exactly).

My other problem, of course, is that I'm not naturally fluent in math. I guess I've done a lot of what some people might consider "fancy math" from time to time, but that experience is always like a brutal war for me, not a trivial and natural exercise like it is to a natural math whiz. So the math of an analytical solution of rotating minkowski volumes scares the crap outta me, and I'll need to be really convinced that is the best approach before I decide to fight another brutal war - and possibly end up in a grave this time!

### #11 wildbunny (Members)

Posted 29 August 2011 - 08:25 AM

OK, let's visit the update loop for the physics engine I describe in my Angry Birds tutorials:

```actionscript
public function Update(dt:Number):void
{
    //
    // add gravity and generate motion bounds
    //
    for ( var i:int=1; i<m_rigidBodies.m_Num; i++ )
    {
        var p:RigidBody = m_rigidBodies.Get(i);

        // apply gravity

        // generate new motion bounds for collision
        p.GenerateMotionBounds( dt );
    }

    //
    // collision detection
    //
    for ( i=1; i<m_rigidBodies.m_Num-1; i++ )
    {
        var rbi:RigidBody = m_rigidBodies.Get(i);

        for ( var j:int=i+1; j<m_rigidBodies.m_Num; j++ )
        {
            var rbj:RigidBody = m_rigidBodies.Get(j);

            var overlap:Boolean = AABB.Overlap( rbi.m_MotionBounds, rbj.m_MotionBounds );

            if ( overlap )
            {
                //
                // narrow phase
                //

                ...
            }
        }
    }

    //
    // solve all constraints
    //
    if (m_contactPool.m_Num > 0 || m_joints.m_Num > 0)
    {
        Solver.Solve(m_contactPool, m_joints, dt, Constants.kNumIterations);
    }

    //
    // integrate forward in time
    //
    for ( i=1; i<m_rigidBodies.m_Num; i++ )
    {
        p = m_rigidBodies.Get(i);

        p.Integrate( dt );
    }
}
```

I have simplified this down to make it more readable. This is only the physics part, so you'd do your rendering afterwards.

Note the call to GenerateMotionBounds() in the first loop, where I'm applying gravity. Looking at the code, I am actually transforming all the points to generate the AABB for the predicted next frame, and then combining the current frame's AABB with the next frame's to generate the motion bounds. However, you might be able to get away with transforming just the AABB verts and then generating a new AABB from that and the current frame. It wasn't a performance concern for me, so I never attempted that.
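The cheaper variant suggested here (sweeping just the AABB rather than every vertex) might be sketched like this, assuming pure translation; a rotating body would need an extra margin:

```python
def generate_motion_bounds(aabb_now, velocity, dt):
    """Swept bounds for the broad phase: translate the current AABB by the
    predicted motion and union it with the current one, per axis.
    aabb_now is a tuple of (min, max) pairs, e.g. ((x0,x1), (y0,y1))."""
    bounds = []
    for axis, (lo, hi) in enumerate(aabb_now):
        d = velocity[axis] * dt          # predicted displacement this frame
        bounds.append((min(lo, lo + d), max(hi, hi + d)))
    return tuple(bounds)
```

Any pair whose motion bounds overlap is a candidate for the narrow phase, which is how fast movers (the bullet-through-paper case) still get considered.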

As you can see, the broad-phase is just AABB vs AABB intersection, which is very simple.

If you have a lot of objects, you absolutely need something more intelligent than a simple nested for loop - I've used Sort & Sweep in the past and hash-grid. Hash-grid is great if your objects are similar sizes. AABB trees are the latest kid on the block in that regard, although I've never used them. The goal is to only consider objects which have moved - objects which are stationary should do little to no work.
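A minimal sort-and-sweep on one axis, for illustration only (my own sketch; a real broad phase keeps the sorted endpoint lists from frame to frame instead of re-sorting):

```python
def sort_and_sweep(boxes):
    """1D sort-and-sweep broad phase. boxes is a list of (minx, maxx)
    intervals; returns index pairs whose x-intervals overlap, i.e. the
    candidate pairs to hand to the narrow phase."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    pairs, active = [], []
    for i in order:
        lo = boxes[i][0]
        # Drop boxes that ended before this one begins.
        active = [j for j in active if boxes[j][1] >= lo]
        # Everything still active overlaps box i on this axis.
        pairs += [(min(i, j), max(i, j)) for j in active]
        active.append(i)
    return pairs
```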

> for object A to have passed through object B, object A must move to opposite sides of object B on all three world-coordinate axes (X, Y, Z)

Is that true? What about if object A is moving in only one axis?

I'm not a big fan of complex maths either - luckily collision detection is all geometry, which I can get my head around quite nicely.

Cheers, Paul.

### #12 maxgpgpu (Members)

Posted 31 August 2011 - 12:18 PM

Ok, lets visit the update loop for the physics engine I describe in my angry birds tutorials:

public function Update(dt:Number):void
{
    //
    // add gravity and generate motion bounds
    //

    for ( var i:int=1; i<m_rigidBodies.m_Num; i++ )
    {
        var p:RigidBody = m_rigidBodies.Get(i);

        // apply gravity

        // generate new motion bounds for collision
        p.GenerateMotionBounds( dt );
    }

    //
    // collision detection
    //

    for ( i=1; i<m_rigidBodies.m_Num-1; i++ )
    {
        var rbi:RigidBody = m_rigidBodies.Get(i);

        for ( var j:int=i+1; j<m_rigidBodies.m_Num; j++ )
        {
            var rbj:RigidBody = m_rigidBodies.Get(j);

            //
            // broad phase: do the motion bounds overlap?
            //

            var overlap:Boolean = AABB.Overlap( rbi.m_MotionBounds, rbj.m_MotionBounds );

            if ( overlap )
            {
                //
                // narrow phase
                //

                ...
            }
        }
    }

    //
    // solve all constraints
    //

    if ( m_contactPool.m_Num > 0 || m_joints.m_Num > 0 )
    {
        Solver.Solve( m_contactPool, m_joints, dt, Constants.kNumIterations );
    }

    //
    // integrate forward in time
    //

    for ( i=1; i<m_rigidBodies.m_Num; i++ )
    {
        p = m_rigidBodies.Get(i);

        p.Integrate( dt );
    }
}

I have simplified this down to make it more readable. This is only the physics part, so you'd do your rendering afterwards.

Note the call to GenerateMotionBounds() in the first loop where I'm applying gravity. Looking at the code, I am actually transforming all the points to generate the AABB for the predicted next frame, then combining the current frame's AABB with the next frame's to generate the motion bounds. However, you might be able to get away with transforming just the AABB verts and generating a new AABB from those and the current frame's. It wasn't a performance concern for me, so I never attempted that.

As you can see, the broad-phase is just AABB vs AABB intersection, which is very simple.

If you have a lot of objects, you absolutely need something more intelligent than a simple nested for loop - I've used sort-and-sweep and hash-grids in the past. A hash-grid is great if your objects are of similar sizes. AABB trees are the latest kid on the block in that regard, although I've never used them. The goal is to only consider objects which have moved - stationary objects should do little to no work.

for object A to have passed through object B, object A must move to opposite sides of object B on all three world-coordinate axes (X, Y, Z)

Is that true? What about if object A is moving in only one axis?
I'm not a big fan of complex maths either - luckily collision detection is all geometry which I can get my head around quite nicely

Cheers, Paul.

I don't see any significant difference, so I guess all we're talking about is the original topic... namely, how to find the time [and position] of first contact (or, as it seems to be called, the "TOI").

Yes, my engine needs to support lots and lots of objects, because it is designed to support a wide variety of applications, including very sophisticated and compute-intensive ones. My vertex transformation routines automatically create an AABB for every object with only 6 assembly-language instructions (and only 3 instructions once 64-bit mode is working), so comparing AABBs is indeed extremely efficient.

My current "broad-phase" collision detection is a modified (optimized) "sweep and prune". At least, I think that's what they call it. Each frame it sorts the "beginning" and "end" markers for each object (in world coordinates). My tests indicated that the usual way of doing this (sorting on all 3 axes) is a waste of time. Instead, I sort on the most efficient axis (almost always the x axis), then perform two simple comparisons on the y and z axes to determine "object-pair overlap"... or not. This is much faster because, for the vast majority of objects, no processing time is spent on the other two axes at all, since those objects do not overlap on the sort axis. I then follow up with a blazing-fast implementation of GJK on the list of potentially colliding object pairs.
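That single-axis scheme can be sketched in Python (my own loose reconstruction from the description above, not the engine's code): sort the begin/end markers on x, sweep an "active" set, and only do the two y/z comparisons for pairs whose x intervals overlap.

```python
def sweep_and_prune_x(boxes):
    """boxes: {id: ((minx,miny,minz),(maxx,maxy,maxz))}.
    Returns broad-phase candidate pairs, sorting on the x axis only."""
    events = []
    for oid, (lo, hi) in boxes.items():
        events.append((lo[0], 0, oid))  # 0 = "begin" marker
        events.append((hi[0], 1, oid))  # 1 = "end" marker
    events.sort()  # begins sort before ends at equal x, so touching boxes count
    active, pairs = set(), []
    for _, kind, oid in events:
        if kind == 1:            # x interval ended: drop from the active set
            active.discard(oid)
            continue
        lo, hi = boxes[oid]
        for other in active:     # every active object overlaps this one on x
            o_lo, o_hi = boxes[other]
            # two cheap comparisons on y and z decide the pair
            if (lo[1] <= o_hi[1] and o_lo[1] <= hi[1] and
                    lo[2] <= o_hi[2] and o_lo[2] <= hi[2]):
                pairs.append(tuple(sorted((oid, other))))
        active.add(oid)
    return pairs
```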

Unlike other engines, I also have an "ANY" collision-detection routine that performs exactly correct collision detection on arbitrarily irregular "concave" objects. But that routine is 2x to 20x (or more) slower than GJK, so it is only executed when GJK says the convex hulls are in collision AND the object is not convex (or not known to be convex). I'm working on speeding up this more general collision-detection routine, but my GJK is so damn fast (thanks to shane) that I'm sure the concave routine will never be as fast.

As you mentioned, my engine doesn't bother to test object-pairs when neither has been modified since the previous frame (moved/rotated/etc).

Yes, the test I mention to avoid "bullet through paper" works fine. Actually, it is pretty much equivalent to an AABB test. Think about it this way.

if (objectA.maxx < objectB.minx) on both last frame and this frame, or (objectB.maxx < objectA.minx) on both frames, there was no collision or "bullet through paper". (Note the separation must be on the same side in both frames; if the inequality flips sides between frames, the object may have tunneled through.)

Likewise on y: if (objectA.maxy < objectB.miny) on both frames, or (objectB.maxy < objectA.miny) on both frames, there was no collision or "bullet through paper".

And on z: if (objectA.maxz < objectB.minz) on both frames, or (objectB.maxz < objectA.minz) on both frames, there was no collision or "bullet through paper".

That should be pretty obvious. If an object is always "above", "below", "leftward", "rightward", "nearer" or "further" than another, it cannot collide.

The way I stated it is equivalent. For any "bullet" to pass through any "paper", it must pass from "above to below" AND "leftward to rightward" AND "nearer to further". Otherwise it is always on one side or other of the other object.

Unfortunately, you omitted half the original sentence, which might be confusing you. It said that overlapping on any axis also counts as "passing through" from "above to below" or "leftward to rightward" or "nearer to further". In other words, the tests are not > and <, they are >= and <=.

Therefore, to take your case, objectA stays at 0,0,0 on both frames, and objectB moves from -1,0,0 to +1,0,0 (moves only on one axis). That is counted as a possible collision because the objects overlapped in y and z (on both frames, in fact, though overlap on only one frame is sufficient). I hope that clears that up. And if you look, it is indeed equivalent to testing AABBs (even in the number of operations to perform, it appears).
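Put as code, the two-frame test reads something like this (a Python sketch of the idea with my own names, not the engine's code; each argument is one object's AABB on the previous or current frame). The one subtlety worth making explicit is that the separation must hold on the same side in both frames, since flipping from one side to the other is exactly the tunneling case:

```python
def possibly_collided(a_prev, a_curr, b_prev, b_curr):
    """Conservative two-frame test. If on some axis A stayed strictly on the
    same side of B across both frames, no contact (and no tunneling) was
    possible. Boxes are ((minx,miny,minz),(maxx,maxy,maxz))."""
    for i in range(3):
        a_stays_below = (a_prev[1][i] < b_prev[0][i] and
                         a_curr[1][i] < b_curr[0][i])
        a_stays_above = (b_prev[1][i] < a_prev[0][i] and
                         b_curr[1][i] < a_curr[0][i])
        if a_stays_below or a_stays_above:
            return False  # separated on this axis for the whole frame step
    return True  # overlap or side-crossing on every axis: run narrow phase
```

In the example above, object B jumping from one side of A to the other on x while overlapping on y and z is correctly flagged as a possible collision, even though the two frame AABBs never intersect.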

I still need to look harder at that rotating Minkowski sum technique, and wrap my brain around it somehow... hopefully. Either that, or hire you to code this part of my engine! :-) As if I could afford to.

### #13wildbunny  Members


Posted 01 September 2011 - 02:15 AM

OK, sounds like you're most of the way there, then.

The only thing which worries me is your description of step #10 A B C D in the frame loop you posted. There should be no need to do anything recursive in that loop, and also no need to advance part of a frame, certainly not for the whole scene.

Also, no need to hire me to figure out how the rotating Minkowski sum stuff works - you can buy the source code to my examples on that blog page and have something solid to work from.

Cheers, Paul.
