
Android GLES1 2d collisions detection


3 replies to this topic

#1 nkarasch   Members   -  Reputation: 171

Posted 30 April 2013 - 02:04 AM

So far I can draw textured quads (triangle strips), move them and it all works independently of screen resolution and aspect ratio. I'm hitting a huge snag when it comes to collision detection. My only prior experiences were with fixed resolutions with no rotation.

 

How do you guys handle it? Is there an easy way to get my objects' vertices after a translation and rotation? I know how to peek at the modelview matrix, but it isn't much help as far as I can tell. I feel like all of my problems would be solved if I could just get the post-transform coordinates of my vertex array.
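For reference, the post-transform positions can be computed on the CPU by applying the same rotation and translation you would pass to glRotatef/glTranslatef. A minimal sketch (the helper name is mine, not from the thread; vertices stored as x,y pairs):

```java
public final class Transform2D {
    // Rotate model-space vertices by angleDeg about the origin, then
    // translate by (tx, ty); returns world-space positions.
    public static float[] apply(float[] verts, float angleDeg,
                                float tx, float ty) {
        double a = Math.toRadians(angleDeg);
        float c = (float) Math.cos(a), s = (float) Math.sin(a);
        float[] out = new float[verts.length];
        for (int i = 0; i + 1 < verts.length; i += 2) {
            float x = verts[i], y = verts[i + 1];
            out[i]     = c * x - s * y + tx; // rotate, then translate
            out[i + 1] = s * x + c * y + ty;
        }
        return out;
    }
}
```

Collision code can then work directly on the returned world-space array, independent of what OpenGL does with the modelview matrix.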




#2 ppodsiadly   Members   -  Reputation: 338


Posted 30 April 2013 - 04:11 AM

First of all, collision detection should have nothing to do with rendering.

The answer to your question is the Separating Axis Theorem (SAT). You can find good explanations on the internet, so I won't describe it here.

As I said, you shouldn't mix physics with graphics. Instead, have two separate components that handle rendering and simulation. Each should have its own matrix (a 3x3 matrix is enough for 2D), and each should transform its vertices separately. In pseudo-code it might look like this:

class GameObject
{
public:
	GraphicsComponent graphics;
	PhysicsComponent physics;

	void update()
	{
		physics.update();                 // step the simulation, check for collisions
		graphics.matrix = physics.matrix; // copy the physical transform to the
		                                  // graphical representation
		graphics.draw();
	}
};
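The SAT test mentioned above can be sketched quite compactly for convex 2D polygons. This is my own minimal version (class and method names are mine): two convex polygons overlap iff no edge normal of either polygon separates their projections.

```java
public final class Sat {
    // Vertices stored as x,y pairs, in consistent winding order.
    public static boolean overlap(float[] a, float[] b) {
        return !hasSeparatingAxis(a, b) && !hasSeparatingAxis(b, a);
    }

    private static boolean hasSeparatingAxis(float[] poly, float[] other) {
        int n = poly.length / 2;
        for (int i = 0; i < n; i++) {
            // Edge from vertex i to i+1; its perpendicular is a candidate axis.
            int j = (i + 1) % n;
            float ex = poly[2 * j]     - poly[2 * i];
            float ey = poly[2 * j + 1] - poly[2 * i + 1];
            float ax = -ey, ay = ex;
            float[] p1 = project(poly, ax, ay);
            float[] p2 = project(other, ax, ay);
            if (p1[1] < p2[0] || p2[1] < p1[0]) return true; // gap found
        }
        return false;
    }

    // Project all vertices onto the axis; return {min, max}.
    private static float[] project(float[] poly, float ax, float ay) {
        float min = Float.POSITIVE_INFINITY, max = Float.NEGATIVE_INFINITY;
        for (int i = 0; i + 1 < poly.length; i += 2) {
            float d = poly[i] * ax + poly[i + 1] * ay;
            min = Math.min(min, d);
            max = Math.max(max, d);
        }
        return new float[]{min, max};
    }
}
```

For the rotated quads in the original question, you would feed this the world-space vertices produced by the physics component's transform.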

Edited by ppodsiadly, 30 April 2013 - 04:13 AM.


#3 nkarasch   Members   -  Reputation: 171


Posted 01 May 2013 - 03:27 AM

Now that I'm working out how to keep track of my objects' locations outside of OpenGL, another question has popped into my head: would reloading a fresh set of already-transformed vertices into OpenGL every frame be significantly slower than using glTranslate, glScale, etc.?

 

I'm dealing with 2D here, usually arrays of 12 values.


something like:

void draw(GL10 gl) {
  float[] vertices = somewhereElse.getVertices(); // already transformed on the CPU
  vertexBuffer.clear();
  vertexBuffer.put(vertices);
  vertexBuffer.position(0); // rewind so glVertexPointer reads from the start

  // ...state setup...

  gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
  // ...draw calls...
}

 

Also, thanks for the SAT advice. I might use something like it.


Edited by nkarasch, 01 May 2013 - 03:39 AM.


#4 ppodsiadly   Members   -  Reputation: 338


Posted 02 May 2013 - 02:47 AM

For objects consisting of a small number of vertices (a few quads or so), transforming on the CPU should give reasonable performance. In fact, I've seen libraries like GWEN and Allegro 5 that transform vertices on the CPU, but they also do batching (grouping multiple objects into one draw call). If you are using textured quads, you would need to implement texture atlases to really benefit from batching.
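One small ingredient batching with atlases needs is per-quad texture coordinates for each sub-image. A hedged sketch of computing them (helper name and corner ordering are my own assumptions, not from any library mentioned):

```java
public final class Atlas {
    // Given a sub-image's pixel rectangle inside an atlas texture,
    // compute UV coordinates for a quad's four corners.
    public static float[] uvRect(int x, int y, int w, int h,
                                 int atlasW, int atlasH) {
        float u0 = (float) x / atlasW, v0 = (float) y / atlasH;
        float u1 = (float) (x + w) / atlasW, v1 = (float) (y + h) / atlasH;
        // Corner order chosen to match a triangle-strip quad:
        // bottom-left, bottom-right, top-left, top-right.
        return new float[]{u0, v1, u1, v1, u0, v0, u1, v0};
    }
}
```

With all quads sharing one atlas texture, their vertex and UV arrays can be appended into a single buffer and drawn in one call.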





