Game engine design help!

Started by
9 comments, last by Stani R 13 years, 8 months ago
I keep coming back to game development and keep falling at the same hurdle, so it's time I solved this once and for all! Please excuse the wall of text.

I am building a game engine. So far my code goes like this...

Engine creates a game state
game state creates an actor
actor creates triangles
triangles have 3 vertices
vertices have 3 points

The engine also creates a renderer.

The actor has a position (x, y, z).
Triangles have no position of their own; a triangle's position is defined by the actor's position plus the positions of its 3 vertices.

Actors can also contain other actors (child actors)

The game engine calls render on the state, the state calls render on all of its actors, and the actors loop over their triangles and queue them inside the renderer object.
Inside the renderer object the triangles are stored inside a vector.

Once all actors have queued their triangles, the game engine then tells the renderer to draw the whole scene.


From this point on I am lost.

Just outputting the triangles to the screen draws them all in the center, since I'm not using the actor's position for anything. Even if I add the actor's position to the triangle's position, it doesn't take into account that the actor may have rotated, and it doesn't help me render child actors, which are positioned relative to the parent actor.

What I *think* I need to do is, when render is called on the actor, translate and rotate to the actor's position, then multiply each of the triangle's vertices by the current model/view matrix. Is it possible to retrieve the resulting x, y, and z positions from this multiplication?

for example...

if an actor is at (10.0f, 0.0f, 0.0f)
and the first vertex of the actor's first triangle is at (2.0f, 0.0f, 0.0f),

is it possible to use glTranslatef( 10.0f, 0.0f, 0.0f );
then multiply the vertex's position by this matrix and retrieve the "global" space coordinates of the vertex as if I hadn't translated?


Please, please, please help. I've been stuck on this for so long and I really would like to move forward,
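(For what it's worth, the answer to the example above is yes: for a pure translation, multiplying a point by the translation matrix collapses to a simple addition. A minimal CPU-side sketch, with illustrative type and function names that are not from the thread:)

```cpp
// A 3D point. For a pure translation, multiplying the homogeneous point
// [x y z 1] by the 4x4 translation matrix (the same math glTranslatef
// applies to the modelview matrix) reduces to adding the offsets.
struct Vec3 { float x, y, z; };

Vec3 translate(const Vec3& v, float tx, float ty, float tz) {
    return { v.x + tx, v.y + ty, v.z + tz };
}
```

So an actor at (10, 0, 0) with a local vertex at (2, 0, 0) gives a world-space vertex at (12, 0, 0).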
Quote:translate and rotate to the actors position


I think you mean rotate then translate.
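The order matters because the two operations don't commute. A small sketch (hypothetical helper names) showing that rotating then translating gives a different point than translating then rotating:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a point about the Z axis by `deg` degrees.
Vec3 rotateZ(const Vec3& v, float deg) {
    float r = deg * 3.14159265358979f / 180.0f;
    return { v.x * std::cos(r) - v.y * std::sin(r),
             v.x * std::sin(r) + v.y * std::cos(r),
             v.z };
}

Vec3 add(const Vec3& a, const Vec3& b) {
    return { a.x + b.x, a.y + b.y, a.z + b.z };
}
```

Rotating (1, 0, 0) by 90 degrees and then translating by (10, 0, 0) yields roughly (10, 1, 0); doing it in the other order yields roughly (0, 11, 0).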

Quote:vertices have 3 points


Bwuh?

Quote:I've been stuck on this for so long and I really would like to move forward


To make a game or an engine? (Note that a game is technically a working engine.) If the latter, note that making an actual game may be easier on you and will help you design a more generic engine later.
A vertex is a single point.
Also, if you are still fresh on some concepts, you should focus on making games and not an engine. Engines are just bits of code that you can reuse for convenience. Your own engine will come with time, from making games and seeing what code you can transfer between them.
Quote:Original post by Denzin
A vertex is a single point.
Also, if you are still fresh on some concepts, you should focus on making games and not an engine. Engines are just bits of code that you can reuse for convenience. Your own engine will come with time, from making games and seeing what code you can transfer between them.


I second this. I am currently working on my first full-featured engine and it's really made up of refactored bits of my old projects.

=============================
RhinoXNA - Easily start building 2D games in XNA! | Projects

So, first, like stated above, a vertex is just one point. At minimum the vertex has x, y, z for position in the local space (the coordinate system of the model the vertex belongs to). It can also have a set of x, y, z for defining a normal vector (for lighting) and s, t for texture coordinates.
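As a concrete sketch, a vertex carrying those attributes might be laid out like this (the struct and field names are illustrative, not from the thread):

```cpp
// One vertex: a position in local/model space, a normal vector for
// lighting, and a pair of texture coordinates. Nothing else in the
// design described above needs to live at the per-vertex level.
struct Vertex {
    float x, y, z;    // position (local space)
    float nx, ny, nz; // normal vector (for lighting)
    float s, t;       // texture coordinates
};
```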

For positioning your triangles, you don't need to retrieve anything back from OpenGL. For now, if you want to keep drawing in immediate mode like you have been doing, what you can do is simply start with an identity modelview matrix, then translate and rotate to the position of the model. Thus your world and local space become the same, and at this point you can simply draw your triangles using their local coordinates.

Your problems likely stem from a lack of understanding of how OpenGL handles transformations. I suggest you read and re-read chapter 3 of the red book.
Quote:Original post by lightbringer
So, first, like stated above, a vertex is just one point. At minimum the vertex has x, y, z for position in the local space (the coordinate system of the model the vertex belongs to). It can also have a set of x, y, z for defining a normal vector (for lighting) and s, t for texture coordinates.

For positioning your triangles, you don't need to retrieve anything back from OpenGL. For now, if you want to keep drawing in immediate mode like you have been doing, what you can do is simply start with an identity modelview matrix, then translate and rotate to the position of the model. Thus your world and local space become the same, and at this point you can simply draw your triangles using their local coordinates.


The wording in my first post was incorrect; the vertex has 3 floats (x, y, and z). I realise that my terminology isn't brilliant.

I could just use rotate and translate, but I'd rather transform the coordinates myself so that I can easily compare the positions of two objects (for collision detection and for working out distances between objects).
Quote:I could just use rotate and translate but I'd rather just transform the coordinates so that I can easily compare the position of two objects (for collision detection and for working out distances between objects)
Although you could do it that way, that's almost never how it's done in practice. You *want* to let the graphics pipeline transform your geometry - that's its job.

As for collision detection, it's quite rare to test for collision on a per-triangle basis, so you really don't need access to the actual world-space geometry for collision detection purposes. (For collision detection, it's far more common to use simple bounding shapes such as spheres, capsules, and boxes for objects, along with various other specialized techniques for terrain and other geometry that can't easily be represented using primitives.)
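A bounding-sphere test of the kind described above is only a few lines. A sketch with made-up names, comparing squared distance against the sum of the radii:

```cpp
struct Sphere { float x, y, z, radius; };

// Two spheres overlap when the distance between their centers is no
// greater than the sum of their radii. Comparing squared values avoids
// taking a square root for every test.
bool spheresOverlap(const Sphere& a, const Sphere& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}
```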
You can of course build your own ModelView matrix and transform each vertex yourself. But this is not efficient - you should leave this kind of operation to the graphics card, which can perform it in parallel for many of your vertices (you can still compute the MV yourself instead of using glTranslate and glRotate, though). You already have the position and orientation of your actors stored, and that's all you need for distance checks and collision detection (along with a bounding volume which you can compute once at startup by looking at the local coordinates in the actor's mesh for instance).
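Computing such a bounding volume once at load time from the mesh's local coordinates might look like this (hypothetical types and names; a bounding sphere is used here for simplicity):

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };

// Scan the mesh's local-space vertices once (e.g. while loading) and keep
// the largest distance from the local origin. That radius, combined with
// the actor's world position at runtime, gives a bounding sphere usable
// for distance and collision checks without touching the triangles again.
float boundingRadius(const std::vector<Vec3>& localVerts) {
    float maxSq = 0.0f;
    for (const Vec3& v : localVerts) {
        maxSq = std::max(maxSq, v.x * v.x + v.y * v.y + v.z * v.z);
    }
    return std::sqrt(maxSq);
}
```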
Quote:Original post by lightbringer
You can of course build your own ModelView matrix and transform each vertex yourself. But this is not efficient - you should leave this kind of operation to the graphics card, which can perform it in parallel for many of your vertices (you can still compute the MV yourself instead of using glTranslate and glRotate, though). You already have the position and orientation of your actors stored, and that's all you need for distance checks and collision detection (along with a bounding volume which you can compute once at startup by looking at the local coordinates in the actor's mesh for instance).


Ok, that makes sense.
What happens when an actor contains another actor, though? i.e. my player actor is carrying a barrel actor.

I get that I can rotate and translate to draw the player, then again to draw the barrel, but how then do I know where the barrel's bounding box is, since I now have no global position for the barrel?
My two cents: why are you making it all so hard on yourself?
IMO, no game builds a mesh as a tree of triangles. In a game, a mesh is a list of triangles going almost straight to the pipeline (it's possible, of course, to do some pre-calculations on the mesh during the loading screen). But it's almost never necessary to do anything with those triangles on the CPU, so why bother making that data so accessible?

If you want to do stuff like collision detection, it's usually much better to check bounding boxes/spheres and sub-bounding boxes/spheres. If, after having found a possible colliding object, you really need to find out exactly which triangle was hit, then yes, you need to access the mesh (or let a shader do it on the graphics hardware for you), but chances are good that this is not what you need.

For transformations, the same thinking applies: the hardware is designed to do this very fast for you, so send the untransformed mesh to the pipeline and give the pipeline the necessary transformation matrix. If things need to be more complicated, write your own vertex shader. This will be very much faster than processing the triangles on the CPU before they are sent to the pipeline.

I understand that you need to study the basics of OpenGL/DX better first. Try to understand the basics, but IMO don't go too deep into the thousands of (for games, IMO obsolete) states and instructions; instead, try to understand the math behind 3D as quickly as possible so that you can write your own shaders to strengthen the pipeline.

And my last tip: don't look at your project with vertices as the atoms; approach it with complete meshes (or submeshes) as the atoms. This will almost always be the right level to work at.

This will not be something you can do in a week... so take it one step at a time.
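As for the earlier player-and-barrel question: one common approach is to keep each actor's local transform (position plus rotation relative to its parent) and compose those transforms on the CPU only for queries like "where is the barrel in world space", while still letting the GPU transform the actual vertices. A simplified sketch with illustrative names, using only a yaw rotation about the Y axis rather than a full matrix or quaternion:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// An actor's pose relative to its parent: a position plus a yaw angle.
// Real engines store a full matrix or a quaternion; this is a sketch.
struct Pose { Vec3 pos; float yawDeg; };

// Transform a point from an actor's local space into its parent's space:
// rotate about Y, then add the actor's position.
Vec3 toParent(const Pose& p, const Vec3& local) {
    float r = p.yawDeg * 3.14159265358979f / 180.0f;
    float c = std::cos(r), s = std::sin(r);
    return { p.pos.x + local.x * c + local.z * s,
             p.pos.y + local.y,
             p.pos.z - local.x * s + local.z * c };
}

// The child's world position: apply the child's pose relative to the
// parent, then the parent's pose relative to the world.
Vec3 worldPosition(const Pose& parent, const Pose& child) {
    return toParent(parent, toParent(child, {0.0f, 0.0f, 0.0f}));
}
```

With the player at (10, 0, 0) facing 90 degrees and the barrel held 2 units in front of them, the barrel's world position (and hence the center of its bounding volume) falls out of this composition even though the barrel never stores a global position itself.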

