How are object coordinates tracked in game engines?

Hi all,
My question is more about general game engine design concepts than about a particular case.
There are many objects in the scene, and for some of them the coordinates must be known
after they are rotated and translated. They are necessary for tracking collisions, for instance.
The problem is that graphics libraries hide these coordinates after transformation. For instance, in OpenGL, after we call glTranslate/glRotate the current matrix holds all the transformations, but vertices are multiplied by it internally when we pass them to OpenGL.
I'm interested in how object coordinates are tracked in a game engine.
There are several ways to do that.
The first one is to use glTranslate/glRotate for positioning objects and to transform the vertices additionally in the engine code. The disadvantage of this approach is that the same transformation is done twice.
Another way is not to use glTranslate/glRotate at all, but to do all the transformations manually and pass already-transformed vertices to the OpenGL library.
Are there any other ways?


The first one is to use glTranslate/glRotate for positioning objects and to transform the vertices additionally in the engine code. The disadvantage of this approach is that the same transformation is done twice.

Generally you don't need the "rendering vertices" in the engine code. You need a single position for the object, and some sort of sphere, box or simplified mesh for collisions.

I can't speak to the implementation of any particular engine, but usually your objects have an attribute called position, which is a vector3: the point in space where the object's origin will be placed. Since position alone is not enough, the object probably also has a quaternion for the orientation, and a scale vector3.

When you want to draw it, you create a transformation matrix from the position, rotation, and scale, and send the model (untransformed) and the matrix to OpenGL.
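For example, a minimal sketch with GLM (the struct and member names here are illustrative, not from any particular engine):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical per-object transform data, as described above.
struct Transform {
    glm::vec3 position{0.0f};
    glm::quat orientation{1.0f, 0.0f, 0.0f, 0.0f}; // w, x, y, z (identity)
    glm::vec3 scale{1.0f};
};

// Compose translation * rotation * scale into one model matrix.
glm::mat4 modelMatrix(const Transform& t) {
    glm::mat4 m = glm::translate(glm::mat4(1.0f), t.position);
    m *= glm::mat4_cast(t.orientation); // quaternion -> rotation matrix
    m *= glm::scale(glm::mat4(1.0f), t.scale);
    return m;
}
// Upload with e.g. glUniformMatrix4fv(loc, 1, GL_FALSE, &m[0][0]);
```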

For collisions it's much easier to create a bounding box or bounding sphere if possible, which can be computed once for the original model and later scaled, rotated, and translated like the object's visible model. This way you can check for collisions with a few points instead of a full 3D model.
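A sketch of that idea for a bounding sphere, assuming the usual translate*rotate*scale composition (the names here are illustrative):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct Sphere { glm::vec3 center; float radius; };

// Move a model-space bounding sphere into world space: transform the
// center like any point, and grow the radius by the largest scale axis
// so the sphere stays conservative under non-uniform scaling.
Sphere worldSphere(const Sphere& local, const glm::vec3& position,
                   const glm::quat& orientation, const glm::vec3& scale) {
    Sphere s;
    s.center = position + orientation * (local.center * scale);
    float maxScale = glm::max(scale.x, glm::max(scale.y, scale.z));
    s.radius = local.radius * maxScale;
    return s;
}
```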

Transforming everything "manually" seems a waste of time and power. OpenGL already does that transformation, and probably does it faster than your code (the transformation is done inside the shader, and the GPU runs many of them in parallel), so sending the model and the matrix makes more sense. If you have a static object you could transform it once and send the transformed model with an identity matrix, but I don't know whether OpenGL is optimized to skip the matrix*vector operation for an identity matrix. If not, you won't be saving much time, only the cost of creating the matrix the first time.

My objects have an [X,Y,Z] vector position. Each object also has a bounding region (one for drawing, one for collision detection).

When my game object is in object space, the XYZ position is 0,0,0.

During Update():
If I give it the position 10,10,10, I know that my *object* gets that position.

I also need to apply the same translation to my bounding regions. In my game, I automatically change the bounding region position any time the object position is changed (don't do this at draw time or your collision volumes won't work). All of my object rotation and scaling are done here as well and applied to the collision volume.
Then I process my objects for collision using the updated collision volumes.

During draw():
The mesh used to draw the object is still in its own local space, ideally at 0,0,0. I then just have to apply a translation matrix to the mesh to get it to align with my object position.
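A rough sketch of that Update()/Draw() split, assuming GLM for math; GameObject and its members are hypothetical:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct GameObject {
    glm::vec3 position{0.0f};
    glm::vec3 boundsMin, boundsMax; // collision box, object space
    glm::vec3 worldMin, worldMax;   // collision box, world space

    void setPosition(const glm::vec3& p) {
        position = p;
        // Keep the collision volume in sync immediately, not at draw time.
        worldMin = boundsMin + position;
        worldMax = boundsMax + position;
    }

    glm::mat4 drawMatrix() const {
        // The mesh stays in local space; translate it at draw time.
        return glm::translate(glm::mat4(1.0f), position);
    }
};
```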

Since no one else mentioned it: no engine uses glTranslate or the like. The GL matrix stack is deprecated, slow, and not suited to how games build matrices. Modern GL indie code typically uses GLM for math; bigger engines often have their own math libraries. In modern graphics you just upload complete matrices to the GPU; you don't use the graphics library to construct them.

As others stated, the matrices are usually constructed from separate transform data. Matrices are great for graphics work (usually) but often not what you want in game code. Physics will typically be in a separate library (Box2D, Bullet, Havok, etc.) and deal with transform data on its own.

Sean Middleditch – Game Systems Engineer – Join my team!

Thanks to all for the replies,

I'm using some sort of bounding cube to detect collisions, but the vertices of that cube must be transformed.

I'm thinking about how to transform those vertices.

The GLM library seems to calculate matrices for use in shaders, so it seems impossible to get the results back from it.

Turn your thinking upside down. Instead of transforming many thousands of vertices from the local object coordinate system into world position, apply the opposite transformation and transform the player into object space.
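For instance, a minimal sketch with GLM, assuming you already have the object's model matrix (the function name is illustrative):

```cpp
#include <glm/glm.hpp>

// Instead of moving thousands of vertices into world space, move one
// point (the player) into object space and test it against the
// untransformed collision mesh.
glm::vec3 toObjectSpace(const glm::mat4& objectModelMatrix,
                        const glm::vec3& playerWorldPos) {
    glm::mat4 worldToObject = glm::inverse(objectModelMatrix);
    return glm::vec3(worldToObject * glm::vec4(playerWorldPos, 1.0f));
}
```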


The GLM library seems to calculate matrices for use in shaders, so it seems impossible to get the results back from it.

Sounds like you haven't actually tried using it yet. The GLM library produces structures that are compatible with GLSL and can be used easily with OpenGL. But it's a math library, and it supports vector operations such as addition and transformation by matrices on the CPU, e.g. v_rotated = m_rotation_mat * v_unrotated.
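A tiny self-contained example of doing that math on the CPU with GLM and reading the result back directly:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // 90-degree rotation about the z axis.
    glm::mat4 rotation = glm::rotate(glm::mat4(1.0f),
                                     glm::radians(90.0f),
                                     glm::vec3(0.0f, 0.0f, 1.0f));
    glm::vec4 v_unrotated(1.0f, 0.0f, 0.0f, 1.0f);
    glm::vec4 v_rotated = rotation * v_unrotated; // ~(0, 1, 0, 1)
    // v_rotated.x, v_rotated.y, ... are inspectable right here on the CPU;
    // no shader is involved.
}
```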

That being said, with regard to your question about game engine design, collision detection and rendering graphics are entirely separate tasks. Collision detection is (commonly) CPU intensive; rendering is GPU intensive; and the engine should be designed for efficiency in those worlds separately. Those tasks should be separated and may very well use different math libraries for support.

EDIT: GLM does provide some collision detection related support via the GTX extensions - intersectLineTriangle, etc. However, if you're interested in collision detection in general, you may want to take a look at open source collision and/or physics libraries to see how a collision engine can be implemented.

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

most games have some sort of "world coordinate system". it may be identical to the coordinate system used by the graphics, or it may not be. it tracks the location and orientation of objects in the game world. this info is then used to render and for movement and collision detection. render and physics may be entirely different 3rd party libraries from the "world coordinate system" used.

in your particular case, you want to replicate some of the transforms the graphics engine does to do collision detection using non-axis aligned bounding boxes.

first: is non-AABB necessary? can you get away with AABB or Bsphere? (both of which should be faster and easier).

if not, i personally use D3DXVec3Transform. (transforms a point by a matrix, openGL should have an equivalent). i create the transform matrix for the object, then multiply all eight points of the bounding box by the transform matrix to get the oriented bounding box. but i only use this for static objects (boulders), so i'm not recomputing BBoxes every frame, just once at game start.
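A GLM-based sketch of that approach for anyone on OpenGL (the function name is illustrative; multiplying by a glm::vec4 with w = 1 is a point transform, like D3DXVec3Transform):

```cpp
#include <array>
#include <glm/glm.hpp>

// Multiply all eight corners of a model-space bounding box by the
// object's transform matrix to get the oriented bounding box. For
// static objects this can be done once at load time.
std::array<glm::vec3, 8> orientedBox(const glm::mat4& modelMatrix,
                                     const glm::vec3& mn,
                                     const glm::vec3& mx) {
    std::array<glm::vec3, 8> corners = {{
        {mn.x, mn.y, mn.z}, {mx.x, mn.y, mn.z},
        {mn.x, mx.y, mn.z}, {mx.x, mx.y, mn.z},
        {mn.x, mn.y, mx.z}, {mx.x, mn.y, mx.z},
        {mn.x, mx.y, mx.z}, {mx.x, mx.y, mx.z},
    }};
    for (glm::vec3& c : corners)
        c = glm::vec3(modelMatrix * glm::vec4(c, 1.0f)); // w = 1: point
    return corners;
}
```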

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

This topic is closed to new replies.
