Android GLES1 2D collision detection

Started by
2 comments, last by Piotr Podsiadły 10 years, 11 months ago

So far I can draw textured quads (triangle strips) and move them, and it all works independently of screen resolution and aspect ratio. I'm hitting a huge snag when it comes to collision detection, though. My only prior experience was with fixed resolutions and no rotation.

How do you guys handle it? Is there any easy way to get my objects' vertices after a translation and rotation? I know how to peek at the model-view matrix, but it isn't of much help as far as I can tell. I feel like all of my problems would be solved if I could just get the post-transform coordinates of my vertex array.


So firstly, collision detection should have nothing to do with rendering.

The answer to your question is the Separating Axis Theorem (SAT). You can find good explanations on the internet, so I won't describe it in detail; a rough sketch of the core test is below.
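
Just to give you the idea, here is a minimal sketch of the test for convex 2D polygons (my own illustration, not code from any particular library; the helper names and the float[] layout of world-space x,y pairs are assumptions): project both polygons onto the axis perpendicular to every edge, and if the projections fail to overlap on any axis, the polygons don't collide.


// Minimal SAT sketch (assumed helper names; not from this thread).
// Each polygon is a float[] of world-space x,y pairs in winding order.
static boolean polygonsOverlap(float[] a, float[] b)
{
    // A separating axis can come from the edges of either polygon.
    return !hasSeparatingAxis(a, b) && !hasSeparatingAxis(b, a);
}

// Tests the axes perpendicular to the edges of 'poly' against both polygons.
static boolean hasSeparatingAxis(float[] poly, float[] other)
{
    for (int i = 0; i < poly.length; i += 2) {
        int j = (i + 2) % poly.length;
        // Edge vector -> perpendicular axis (normalizing is unnecessary for an overlap test)
        float axisX = -(poly[j + 1] - poly[i + 1]);
        float axisY = poly[j] - poly[i];
        float[] p = project(poly, axisX, axisY);
        float[] q = project(other, axisX, axisY);
        if (p[1] < q[0] || q[1] < p[0]) {
            return true; // gap on this axis -> the polygons are separated
        }
    }
    return false;
}

// Returns {min, max} of the polygon's vertices projected onto the axis.
static float[] project(float[] poly, float axisX, float axisY)
{
    float min = Float.POSITIVE_INFINITY, max = Float.NEGATIVE_INFINITY;
    for (int i = 0; i < poly.length; i += 2) {
        float d = poly[i] * axisX + poly[i + 1] * axisY;
        min = Math.min(min, d);
        max = Math.max(max, d);
    }
    return new float[] { min, max };
}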

As I said, you shouldn't "mix" physics with graphics. You should instead have two separate objects which handle rendering and simulation. Each of them should have its own transformation matrix (a 3x3 matrix is enough for 2D), and the vertices should be transformed separately. In pseudo-code it might look like this:


class GameObject
{
public:
	GraphicsComponent graphics;
	PhysicsComponent physics;
	
	void update()
	{
		physics.update(); // check for collisions
		graphics.matrix = physics.matrix; // copy the transform of the physical representation to the graphical one
		graphics.draw();
	}
};
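
That also answers the question about getting post-transform vertices: the physics component can apply its own translation/rotation on the CPU and hand world-space vertices straight to the collision test, without touching OpenGL at all. A minimal sketch using android.graphics.Matrix (the class layout and field names here are just illustrative):


import android.graphics.Matrix;

// Sketch of a physics component that keeps its own 2D transform
// and produces world-space vertices for collision tests.
class PhysicsComponent
{
	float[] localVertices;    // model-space x,y pairs (e.g. a quad)
	float x, y, angleDegrees; // position and rotation of the object

	float[] getWorldVertices()
	{
		Matrix m = new Matrix();
		m.postRotate(angleDegrees); // rotate around the model origin...
		m.postTranslate(x, y);      // ...then move into place
		float[] world = localVertices.clone();
		m.mapPoints(world);         // 'world' now holds the post-transform x,y pairs
		return world;
	}
}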

Now that I'm working out how to keep track of the locations of my objects outside of OpenGL, another question has popped into my head. Would reloading a fresh set of pre-transformed vertices into OpenGL every frame be significantly slower than using glTranslate, glScale, etc.?

I'm dealing with 2D here, usually arrays of 12 values.


something like:


draw(){
  vertices = somewhereElse.getVertices();
  vertexBuffer.put(vertices);
  vertexBuffer.position(0); // rewind so glVertexPointer reads from the start of the data

  //blah blah blah

  gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
  //draw stuff

  //blah blah blah
}

Also, thanks for the SAT advice. I might use something like it.

In the case of objects which consist of a small number of vertices (like a few squares or so), it should have reasonable performance. In fact, I've seen libraries like GWEN or Allegro 5 which transform vertices on the CPU, but they also do batching (grouping multiple objects into one draw call). If you are using textured quads, you would need to implement texture atlases to really benefit from batching.
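
To give a rough idea of what batching CPU-transformed quads might look like in GL ES 1.x (just a sketch under my own assumptions; texture coordinates and the atlas lookup are omitted for brevity):


import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

// Sketch: collect world-space vertices of many quads into one buffer
// and draw them with a single glDrawArrays call per frame.
class QuadBatch
{
	private final FloatBuffer buffer;
	private int vertexCount;

	QuadBatch(int maxQuads)
	{
		// 6 vertices per quad (two triangles), 2 floats each, 4 bytes per float
		buffer = ByteBuffer.allocateDirect(maxQuads * 6 * 2 * 4)
				.order(ByteOrder.nativeOrder())
				.asFloatBuffer();
	}

	void begin()
	{
		buffer.clear();
		vertexCount = 0;
	}

	// worldVertices: six x,y pairs, already transformed on the CPU
	void addQuad(float[] worldVertices)
	{
		buffer.put(worldVertices);
		vertexCount += 6;
	}

	void draw(GL10 gl)
	{
		buffer.position(0); // rewind before handing the buffer to GL
		gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
		gl.glVertexPointer(2, GL10.GL_FLOAT, 0, buffer);
		gl.glDrawArrays(GL10.GL_TRIANGLES, 0, vertexCount);
		gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
	}
}

Storing each quad as two independent triangles (instead of a strip) is what lets many quads share one draw call without degenerate-triangle tricks.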

This topic is closed to new replies.
