3d translation

Started by
11 comments, last by Scavenger23 17 years, 10 months ago
I am writing a simple 3d engine from scratch. I have the rotation of the object working: it will rotate around its origin. But it always stays in the center of the screen, no matter what direction the camera is pointing. This is my rendering code:

// p_num is the point number
// it will go through all of the points in the model and draw them
for(p_num = 0; p_num < points; p_num += 1)
{
    dest_x = global.cam_x - x + X[p_num]; // the point to rotate on
    dest_y = global.cam_y - y + Y[p_num];
    dest_z = global.cam_z - z + Z[p_num];

    temp_y = (dest_y*cos_x) - (dest_z*sin_x); // rotate on the x axis
    temp_z = (dest_z*cos_x) + (dest_y*sin_x);
    dest_y = temp_y; dest_z = temp_z;

    temp_z = (dest_z*cos_y) - (dest_x*sin_y); // rotate on the y axis
    temp_x = (dest_x*cos_y) + (dest_z*sin_y);
    dest_z = temp_z; dest_x = temp_x;

    temp_x = (dest_x*cos_z) - (dest_y*sin_z); // rotate on the z axis
    temp_y = (dest_y*cos_z) + (dest_x*sin_z);
    dest_x = temp_x; dest_y = temp_y;

    x_world = dest_x;
    y_world = dest_y;
    z_world = dest_z;

    // convert to 2d for the screen
    screen_x = global.center_w + global.scale * x_world / (z_world + global.distance);
    screen_y = global.center_h + global.scale * y_world / (z_world + global.distance);

    draw_point(screen_x, screen_y);
}
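The last two lines of the loop are a standard perspective divide. A minimal sketch of that step in Python, where the default constants are hypothetical stand-ins for the global.* values in the code above:

```python
def project(x_world, y_world, z_world, center_w=160, center_h=120,
            scale=100.0, distance=200.0):
    """Perspective-project a camera-space point onto the screen.

    Mirrors the formula in the loop above:
    screen = center + scale * coord / (z + distance).
    """
    denom = z_world + distance
    if denom <= 0:  # point is at or behind the projection plane
        return None
    screen_x = center_w + scale * x_world / denom
    screen_y = center_h + scale * y_world / denom
    return screen_x, screen_y
```

Points with larger z land closer to the screen center, which is what produces the perspective effect.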
----------------------------Project Legend
Where is the camera rotation being considered?

Also, another thing I am unsure about is:
dest_x = global.cam_x-x+X[p_num]; // the point to rotate on
dest_y = global.cam_y-y+Y[p_num];
dest_z = global.cam_z-z+Z[p_num];

What are the -x, -y, -z values?
Right now the camera rotation isn't being considered. I have tried putting it into the code, but it always produces weird curves and twists.

dest_x = global.cam_x-x-X[p_num];
the global.cam_x is the camera's x, the -x is the object's x location, and X[] is the x value of the point relative to the object.
----------------------------Project Legend

Hello, I'd suggest learning a bit about matrices. You can accomplish what you describe easily and with pretty clean code.

Best regards
For camera movement to work right, you need to consider all four variables: object rotation, object movement, camera rotation, and camera movement.
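That order of operations can be sketched for a single rotation axis in Python (all the names here are illustrative, not taken from the code above; in 3D you would apply the same idea per axis):

```python
import math

def rot_z(x, y, angle):
    """Rotate (x, y) by angle radians about the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return x * c - y * s, x * s + y * c

def model_to_camera(px, py, obj_x, obj_y, obj_angle, cam_x, cam_y, cam_angle):
    """Apply the four transforms in order:
    1. object rotation, 2. object movement,
    3. camera movement, 4. camera rotation (inverse)."""
    wx, wy = rot_z(px, py, obj_angle)   # object rotation
    wx, wy = wx + obj_x, wy + obj_y     # object movement
    vx, vy = wx - cam_x, wy - cam_y     # camera movement (relative position)
    return rot_z(vx, vy, -cam_angle)    # undo the camera's rotation
```

The key point is that the camera's rotation is applied last, and with the opposite sign, because moving the camera is equivalent to moving the whole world the other way.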

Check out my simple camera article for how to create a working camera with object movement in OpenGL. If you're using DirectX, it's slightly easier, because there's one matrix for the object, and one for the camera.
enum Bool { True, False, FileNotFound };
I don't want to use matrices, mainly because the language I am creating this in doesn't support matrices very well. I would rather learn the math and then apply it myself. This is a 2d environment, so I have to write all of the 3d code from scratch. All I'm asking for is a few formulas.
----------------------------Project Legend
Any language that supports structs, or at a push arrays, 'supports' matrices (in that you can easily write a few functions to do matrix manipulation). However, orientation angles for a camera are a reasonable way to go.
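Those "few functions" for array-based matrices can be as small as this Python sketch (names are illustrative):

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply a 4x4 matrix to a homogeneous point [x, y, z, 1]."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    """Build a 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]
```

With just these, rotation and projection matrices can be added the same way, and transforms compose by multiplication.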

Here's my projection code to turn a 3D point and camera orientation into screen coords (C#):
  public ProjectionResult Project(Vector v, Vector viewTarget, double maxX, double maxY, int viewX, int viewY)
  {
      Vector ds = Vector.Add(viewTarget, ViewFrom, 1, -1);
      double yaw = Math.Atan2(ds.X, ds.Y);
      double pitch = Math.Atan2(ds.Z, Math.Sqrt((ds.X * ds.X) + (ds.Y * ds.Y)));
      return Project(v, yaw, pitch, 0, maxX, maxY, viewX, viewY);
  }

  public ProjectionResult Project(Vector v, double yaw, double pitch, double roll, double maxX, double maxY, int viewX, int viewY)
  {
      Vector ds = Vector.Add(v, ViewFrom, 1, -1);

      // Unrotate ds to take account of our view angles
      ds = Rotate(ds, Axis.Z, -yaw);
      ds = Rotate(ds, Axis.X, pitch);

      double theta = Math.Atan2(ds.Z, Math.Sqrt((ds.X * ds.X) + (ds.Y * ds.Y)));
      double phi = Math.Atan2(ds.X, ds.Y);

      // wrap phi into (-pi, pi]
      if(phi < -Math.PI) phi += 2 * Math.PI;
      if(phi > Math.PI) phi -= 2 * Math.PI;

      if((Math.Abs(phi) > (Math.PI * 0.5)) || (Math.Abs(theta) > (Math.PI * 0.5)))
          return new ProjectionResult(0, 0, false);

      double x = Math.Tan(phi);
      double y = Math.Tan(theta);

      //if((Math.Abs(x) > maxX) || (Math.Abs(y) > maxY))
      //    return new ProjectionResult(0, 0, false);

      return new ProjectionResult((float)(((x / maxX) + 0.5) * viewX), (float)(((-y / maxY) + 0.5) * viewY), true);
  }


It doesn't handle camera roll, but that's just a manipulation of the screen coordinates at the end.

Oh yeah, my Projection class stores the camera origin in ViewFrom, which is why it's not passed in as an argument.
That seems to be right, but I'm just using plain 360-degree angles in separate variables, not vectors, with Cartesian coordinates. I'd just like some plain math, thanks.
----------------------------Project Legend
The only 'vector maths' in that code is the assignment of ds, which is just the position of the point in question relative to the camera (you can split it out into three variables if you must, although I really think a struct { int x, y, z; } t_vector at the least would be a good idea), and the Rotate function, which I forgot to include; sorry about that:

  public static Vector Rotate(Vector v, Axis axis, double value)
  {
      switch(axis)
      {
          case Axis.X:
              return new Vector(v.X,
                                (v.Y * Math.Cos(value)) + (v.Z * Math.Sin(value)),
                                (v.Z * Math.Cos(value)) - (v.Y * Math.Sin(value)));
          case Axis.Y:
              return new Vector((v.X * Math.Cos(value)) + (v.Z * Math.Sin(value)),
                                v.Y,
                                (v.Z * Math.Cos(value)) - (v.X * Math.Sin(value)));
          case Axis.Z:
              return new Vector((v.X * Math.Cos(value)) + (v.Y * Math.Sin(value)),
                                (v.Y * Math.Cos(value)) - (v.X * Math.Sin(value)),
                                v.Z);
          default:
              throw new ArgumentException("Axis not valid");
      }
  }


Okay, in basic maths what you do is as follows:

  • First, translate the target point into the camera's space: ds = r - o_camera (the point's position minus the camera's origin).
  • Then 'unrotate' the target point by the camera's orientation angles. That's what the Rotate function above does, and your code there looks equivalent.
  • Then do the simple projection, since the point is now in a simple untransformed coordinate system.
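The first two steps above can be sketched in Python; the rotation logic is a port of the quoted Rotate function, with illustrative names:

```python
import math

def rotate_axis(v, axis, angle):
    """Rotate the tuple v = (x, y, z) about one axis,
    matching the sense of the Rotate function above."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    if axis == 'x':
        return (x, y * c + z * s, z * c - y * s)
    if axis == 'y':
        return (x * c + z * s, y, z * c - x * s)
    return (x * c + y * s, y * c - x * s, z)  # axis == 'z'

def world_to_camera(point, cam_pos, yaw, pitch):
    # Step 1: translate into camera space, ds = r - o_camera
    ds = tuple(p - o for p, o in zip(point, cam_pos))
    # Step 2: unrotate by the camera's orientation angles
    ds = rotate_axis(ds, 'z', -yaw)
    ds = rotate_axis(ds, 'x', pitch)
    return ds
```

After this, the point is in the untransformed coordinate system and the simple projection from the first post applies directly.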
There is a reason why people are using matrices and vectors to answer your questions: you need to learn how to use them to make games. It's that simple. No matter what part of the game you work on, you will need to use them; AI, physics, and graphics engines all use these math data structures. I know it may seem like a pain at first, but these are the 'plain math' you are looking for.

The power of matrices is that you can store translation, rotation, scale, shear, reflection, and projection all in 16 floats. Matrix form also allows for the fastest possible way to apply all of these things (which any 3d engine needs) to a set of vertices. Again, I reiterate: you need to learn them. You may not want to, you may hate math, but in the end your engine will only be a pain in your arse and not fast or functional without them.

A really quick example is using 3d models in your game. Suppose you have a herd of cows (like 15 of them) standing in a field. You could have 15 copies of your cow model with vertices transformed into 3d world coordinates so that each vertex is EXACTLY where it should be in the world. It would work well enough, for that small scenario. But every time a cow moved or turned you would have to recalculate ALL its vertices, which would be a pain and very slow.
Now instead, imagine you had only 1 copy of your cow model, and 15 matrices that described the cows' positions and rotations. Now, you are using MUCH less space. Also, when a cow moves or turns, you only have to modify AT MOST 16 floats, though most often it won't even be that many!

The clincher for using matrices in your case is that the world, view, and projection matrices can all be combined into one matrix that will take a set of vertices (a cow, for instance), put it into your game world, then decide where it is relative to your camera and project it onto your screen with no more work than any of the three would require separately.
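A minimal Python sketch of that combination, using toy translation-only matrices (all names and values are illustrative):

```python
def mat_mul(a, b):
    """4x4 matrix product (nested-list matrices)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply a 4x4 matrix to a homogeneous point [x, y, z, 1]."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# world places the cow in the scene; view shifts the scene relative to
# the camera; a real pipeline would multiply in a projection matrix too.
world = translation(10, 0, 0)    # cow at x = 10
view = translation(-2, 0, 0)     # camera at x = 2
combined = mat_mul(view, world)  # one matrix does both steps at once
```

Every vertex of the model then needs only a single matrix-vector multiply, no matter how many transforms were folded into the combined matrix.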

Finally, graphics cards have hardware (the GPU) that will multiply matrices and perform transformations much faster than your CPU. To take advantage of this, you simply pass the card your vertices and matrices, and it will do the transformations very fast, which lets your CPU do other stuff in the meantime (like AI, physics, and networking).

This topic is closed to new replies.
