Best method for 2D collision detection with transformation matrices?

This topic is 3199 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

Recommended Posts

Hi, I'm currently making a 2D physics playground style game for my final year project at uni, and I've just got to the point where I can easily add new game objects. Before I do, I want to finalize my collision detection approach instead of botching something together for now and regretting it later.

At the moment my collision detection is working; it's just that I want to start using transformation matrices, World->View->Projection style, so I can follow the character with the camera (which would be the View matrix). As far as I can tell this is going to be a bit of a mission, though, because the main character of the game is a blob. So, say each game object's coordinates were based around (0,0), which is what they're supposed to be when doing these transformations, does this mean I'd have to perform these steps:

1. Transform the positions to world coordinates.
2. Check if they're colliding after the transformation.
3. Update the shape of the blob based on the collision.
4. Transform them back to update the original...

Because that seems kinda crap and expensive in terms of CPU usage :S

I was wondering if I was just better off doing it how I'm planning at the moment: keeping my collision detection how it is and having a RECT that defines the camera view. Then I could scale/move the world content based on the size and position of this rect? The only reason I was thinking of using the matrices was because it'd probably look more technically professional when I handed it in...

What do you guys think?

Thanks,
Snow.

If I understand your question correctly, the math is the same whether you define it as a matrix or in relation to this View RECT of yours.

In the matrix form, the World transform locates things within the common world frame (that is, relative to one another in the world). The View transform then translates the entire world frame into the View frame (that is, relative to the camera). Finally, the Projection transform takes things from the View frame to the Screen frame: it scales the normalized coordinates of the View frame to device coordinates (pixels), centers the View frame origin on the center of the Screen frame, and may optionally add perspective (in combination with the view frustum), drop components (orthographic projection), or other effects.

You can do all this with simple algebra, which if I understand correctly is the form you have now, but the math is essentially the same. Conceptually, the linear algebra (matrix) form is cleaner, and should not be noticeably slower -- in fact, because matrices and vectors lend themselves to SIMD units like SSE and AltiVec, they can be significantly faster. There's an optimization study online somewhere in which a naive matrix-vector multiplication was taken from ~100 cycles down to only 17 through intelligent application of SSE assembly and some minor algorithmic modifications.


The part of your post that confuses me is "4. Transform them back to update the original..." What "Original" are you updating exactly? It's neither common, nor a good idea, to modify the vertices of your models in any persistent way. What you load in should remain immutable for the run of the program, only copies should be altered and, in general, these modifications should be recomputed every frame, otherwise your vertices will start to drift away from their original positions due to the limited accuracy of floating point, and accumulated errors.

Thanks Ravyne, sorry about the bad explanation in the last post, but somehow you understood what I wanted to do ;) Yeah, now I think about it, step 4 was a little stupid... I think the thing that's confusing me is that the blobs are deforming objects, and the only objects I've worked with when transforming things with matrices have been solid squares, circles, etc. in a Java module at uni.

Does this mean that I'll have to store a separate transformation matrix for every particle in the blob?

Thanks,
Snow.

[Edited by - Sn0w on March 13, 2009 2:42:19 PM]

What I would probably do in your blob case is create a copy of your blob vertices. I would transform the world geometry and anything the blob can "blob around" into the blob's frame (by translating the world by the opposite of the blob's position in the world). Then you can do whatever operations you do to deform the copy of the blob. Once you've got your deformed copy, translate it back out into the world, followed by the View and Projection matrices as usual.

