
3D translation


I am writing a simple 3D engine from scratch. I have the rotation of the object working: it rotates around its origin. But the object always stays in the center of the screen, no matter which direction the camera is pointing. This is my rendering code:
// p_num is the point number;
// loop through all of the points in the model and draw them
for (p_num = 0; p_num < points; p_num += 1)
{
    // offset between the camera and the point (x, y, z is the object's position;
    // X[], Y[], Z[] hold the point's coordinates relative to the object)
    dest_x = global.cam_x - x + X[p_num];
    dest_y = global.cam_y - y + Y[p_num];
    dest_z = global.cam_z - z + Z[p_num];

    // rotate around the x axis
    temp_y = (dest_y * cos_x) - (dest_z * sin_x);
    temp_z = (dest_z * cos_x) + (dest_y * sin_x);
    dest_y = temp_y; dest_z = temp_z;

    // rotate around the y axis
    temp_z = (dest_z * cos_y) - (dest_x * sin_y);
    temp_x = (dest_x * cos_y) + (dest_z * sin_y);
    dest_z = temp_z; dest_x = temp_x;

    // rotate around the z axis
    temp_x = (dest_x * cos_z) - (dest_y * sin_z);
    temp_y = (dest_y * cos_z) + (dest_x * sin_z);
    dest_x = temp_x; dest_y = temp_y;

    x_world = dest_x;
    y_world = dest_y;
    z_world = dest_z;

    // convert to 2D screen coordinates
    screen_x = global.center_w + global.scale * x_world / (z_world + global.distance);
    screen_y = global.center_h + global.scale * y_world / (z_world + global.distance);

    draw_point(screen_x, screen_y);
}

Where is the camera rotation being considered?

Also, another thing I am unsure about is:
dest_x = global.cam_x-x+X[p_num]; // the point to rotate on
dest_y = global.cam_y-y+Y[p_num];
dest_z = global.cam_z-z+Z[p_num];

What are the -x -y -z values?

Right now it's not being considered. I have tried putting it in the code, but it always produces weird curves and twists.

dest_x = global.cam_x-x-X[p_num];
global.cam_x is the camera's x position, -x is the object's x position, and X[] is the x value of the point relative to the object.


Hello, I'd suggest learning a bit about matrices. You can accomplish the things you describe easily and with pretty clean code.

Best regards

For camera movement to work right, you need to consider all four variables: object rotation, object movement, camera rotation, and camera movement.

Check out my simple camera article for how to create a working camera with object movement in OpenGL. If you're using DirectX, it's slightly easier, because there's one matrix for the object, and one for the camera.
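To make the order of operations concrete, here is a minimal sketch (not the article's code; the names Vec3, objYaw, camYaw, camPitch, scale, etc. are invented for illustration). It assumes angles in radians, Y vertical, and Z pointing into the screen; the object only yaws here, but its pitch and roll would slot in the same way:

using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

static class TransformSketch
{
    static Vec3 RotateX(Vec3 p, double a) => new Vec3(
        p.X,
        p.Y * Math.Cos(a) + p.Z * Math.Sin(a),
        p.Z * Math.Cos(a) - p.Y * Math.Sin(a));

    static Vec3 RotateY(Vec3 p, double a) => new Vec3(
        p.X * Math.Cos(a) + p.Z * Math.Sin(a),
        p.Y,
        p.Z * Math.Cos(a) - p.X * Math.Sin(a));

    // All four pieces in order: object rotation, object movement, camera movement, camera rotation.
    public static void Project(
        Vec3 local,                                   // vertex relative to the object's origin
        double objYaw, Vec3 objPos,                   // object orientation and position
        double camYaw, double camPitch, Vec3 camPos,  // camera orientation and position
        double scale, double centerX, double centerY,
        out double screenX, out double screenY)
    {
        // 1. object space -> world space: rotate by the object's angle, then add its position
        Vec3 w = RotateY(local, objYaw);
        w = new Vec3(w.X + objPos.X, w.Y + objPos.Y, w.Z + objPos.Z);

        // 2. world space -> view space: subtract the camera position,
        //    then apply the *inverse* (negated) camera rotation
        Vec3 v = new Vec3(w.X - camPos.X, w.Y - camPos.Y, w.Z - camPos.Z);
        v = RotateY(v, -camYaw);
        v = RotateX(v, -camPitch);

        // 3. perspective projection (assumes v.Z > 0, i.e. the point is in front of the camera;
        //    screen Y grows downwards, hence the minus sign)
        screenX = centerX + scale * v.X / v.Z;
        screenY = centerY - scale * v.Y / v.Z;
    }
}

Leaving any of those four pieces out, or applying the camera rotation before the camera translation, gives exactly the "object stays centered" and "weird curves" symptoms described earlier in the thread.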

I don't want to use matrices, mainly because the language I am creating this in doesn't support matrices very well. I would rather learn the math and then apply it myself. This is a 2D environment, so I have to write all of the code from scratch. All I'm asking for is a few formulas.

Any language that supports structs, or at a push arrays, 'supports' matrices (in that you can easily write a few functions to do matrix manipulation). However, orientation angles for a camera are a reasonable way to go.

Here's my projection code to turn a 3D point and camera orientation into screen coords (C#):
public ProjectionResult Project(Vector v, Vector viewTarget, double maxX, double maxY, int viewX, int viewY)
{
    // Derive yaw and pitch from the direction the camera is looking in.
    Vector ds = Vector.Add(viewTarget, ViewFrom, 1, -1);
    double yaw = Math.Atan2(ds.X, ds.Y);
    double pitch = Math.Atan2(ds.Z, Math.Sqrt((ds.X * ds.X) + (ds.Y * ds.Y)));
    return Project(v, yaw, pitch, 0, maxX, maxY, viewX, viewY);
}

public ProjectionResult Project(Vector v, double yaw, double pitch, double roll, double maxX, double maxY, int viewX, int viewY)
{
    // Position of the point relative to the camera.
    Vector ds = Vector.Add(v, ViewFrom, 1, -1);

    // Unrotate ds to take account of our view angles.
    ds = Rotate(ds, Axis.Z, -yaw);
    ds = Rotate(ds, Axis.X, pitch);

    double theta = Math.Atan2(ds.Z, Math.Sqrt((ds.X * ds.X) + (ds.Y * ds.Y)));
    double phi = Math.Atan2(ds.X, ds.Y);

    // Wrap phi into (-PI, PI].
    if (phi < -Math.PI) phi += 2 * Math.PI;
    if (phi > Math.PI) phi -= 2 * Math.PI;

    // Reject points outside the forward hemisphere.
    if ((Math.Abs(phi) > (Math.PI * 0.5)) || (Math.Abs(theta) > (Math.PI * 0.5)))
        return new ProjectionResult(0, 0, false);

    double x = Math.Tan(phi);
    double y = Math.Tan(theta);

    // if ((Math.Abs(x) > maxX) || (Math.Abs(y) > maxY))
    //     return new ProjectionResult(0, 0, false);

    return new ProjectionResult((float)(((x / maxX) + 0.5) * viewX), (float)(((-y / maxY) + 0.5) * viewY), true);
}

It doesn't handle camera roll, but that's just a manipulation of the screen coordinates at the end.
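For example (a sketch only, with made-up names, and not part of the Projection class above), roll can be applied by rotating the already-projected point around the center of the view:

// Applies camera roll to a projected point; rollAngle is in radians,
// viewX/viewY are the view's dimensions in pixels (uses System.Math).
static void ApplyRoll(ref float screenX, ref float screenY,
                      double rollAngle, int viewX, int viewY)
{
    double cx = viewX * 0.5, cy = viewY * 0.5;    // center of the screen
    double dx = screenX - cx, dy = screenY - cy;  // offset from the center
    screenX = (float)(cx + dx * Math.Cos(rollAngle) - dy * Math.Sin(rollAngle));
    screenY = (float)(cy + dx * Math.Sin(rollAngle) + dy * Math.Cos(rollAngle));
}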

Oh yeah, my Projection class stores the camera origin in ViewFrom, which is why it's not passed in as an argument.

That seems to be right, but I'm just using plain 360-degree angles in separate variables, not vectors, with Cartesian coordinates. I'd just like some plain math, thanks.

The only 'vector maths' in that code is the assignment of ds, which is just the position of the point in question relative to the camera (you can split it out into three separate variables if you must, although I really think at least a struct { int x, y, z; } t_vector would be a good idea), and the Rotate function, which I forgot to include; sorry about that:

public static Vector Rotate(Vector v, Axis axis, double value)
{
    switch (axis)
    {
        case Axis.X:
            return new Vector(v.X,
                              (v.Y * Math.Cos(value)) + (v.Z * Math.Sin(value)),
                              (v.Z * Math.Cos(value)) - (v.Y * Math.Sin(value)));
        case Axis.Y:
            return new Vector((v.X * Math.Cos(value)) + (v.Z * Math.Sin(value)),
                              v.Y,
                              (v.Z * Math.Cos(value)) - (v.X * Math.Sin(value)));
        case Axis.Z:
            return new Vector((v.X * Math.Cos(value)) + (v.Y * Math.Sin(value)),
                              (v.Y * Math.Cos(value)) - (v.X * Math.Sin(value)),
                              v.Z);
        default:
            throw new ArgumentException("Axis not valid");
    }
}

Okay, in basic maths what you do is as follows:

  • First, translate the target point into the camera's space: ds = r - o_camera (the point's position minus the camera's position).
  • Then 'unrotate' the target point by the camera's orientation angles. That's what the Rotate function above does, and your code there looks equivalent.
  • Then do the simple projection, since the point is now in a simple untransformed coordinate system (see the sketch below).
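Since you asked for plain math with separate variables, here are those three steps written out with nothing but trig. This is a sketch only: it assumes px/py/pz is the point's world position, cam_x/cam_y/cam_z the camera position, cam_yaw/cam_pitch its angles in radians, Y vertical, Z pointing into the screen, and scale/center_w/center_h as in your own projection step:

// step 1: ds = r - o_camera (point position minus camera position)
double dx = px - cam_x, dy = py - cam_y, dz = pz - cam_z;

// step 2: unrotate by the camera's yaw (about the vertical axis), then by its pitch
double ux = dx * Math.Cos(-cam_yaw) + dz * Math.Sin(-cam_yaw);
double uz = dz * Math.Cos(-cam_yaw) - dx * Math.Sin(-cam_yaw);
double uy = dy * Math.Cos(-cam_pitch) + uz * Math.Sin(-cam_pitch);
uz        = uz * Math.Cos(-cam_pitch) - dy * Math.Sin(-cam_pitch);

// step 3: simple perspective projection (only valid while uz > 0, i.e. in front of the camera)
double screen_x = center_w + scale * ux / uz;
double screen_y = center_h + scale * uy / uz;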

There is a reason why people are using matrices and vectors to answer your questions: you need to learn how to use them to make games. It's that simple. No matter what part of the game you work on, you will need them; AI, physics, and graphics engines all use these math data structures. I know it may seem like a pain at first, but they are the 'plain math' you are looking for. The power of matrices is that you can store translation, rotation, scale, shear, reflection, and projection all in 16 floats. Matrix form also allows the fastest possible way to apply all of these things (which any 3D engine needs) to a set of vertices. Again, I reiterate: you need to learn them. You may not want to, you may hate math, but in the end your engine will only be a pain in your arse, and neither fast nor functional, without them.

A really quick example is using 3D models in your game. Suppose you have a herd of cows (say 15 of them) standing in a field. You could keep 15 copies of your cow model with the vertices transformed into 3D world coordinates, so that each vertex is EXACTLY where it should be in the world. That would work well enough for such a small scenario, but every time a cow moved or turned you would have to recalculate ALL of its vertices, which would be a pain and very slow.
Now instead, imagine you had only one copy of your cow model and 15 matrices that described the cows' positions and rotations. You are now using MUCH less space. Also, when a cow moves or turns, you only have to modify AT MOST 16 floats, and most often not even that many.

The clincher for using matrices in your case is that the world, view, and projection matrices can all be combined into one matrix that takes a set of vertices (a cow, for instance), puts it into your game world, works out where it is relative to your camera, and projects it onto your screen, with no more work than any one of the three steps would require separately.
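To make the "16 floats" point concrete, here is a minimal sketch. The names Mat4, Translation and RotationY are invented for illustration; a real engine would use floats and a maths library, or the graphics API's own matrix type:

using System;

class Mat4
{
    // Row-major storage: M[row * 4 + col]. Points are column vectors (x, y, z, 1).
    public double[] M = new double[16];

    public static Mat4 Identity()
    {
        var r = new Mat4();
        r.M[0] = r.M[5] = r.M[10] = r.M[15] = 1;
        return r;
    }

    public static Mat4 Translation(double tx, double ty, double tz)
    {
        var r = Identity();
        r.M[3] = tx; r.M[7] = ty; r.M[11] = tz;   // the last column holds the translation
        return r;
    }

    public static Mat4 RotationY(double a)        // rotation about the vertical axis
    {
        var r = Identity();
        r.M[0] = Math.Cos(a);  r.M[2]  = Math.Sin(a);
        r.M[8] = -Math.Sin(a); r.M[10] = Math.Cos(a);
        return r;
    }

    public static Mat4 operator *(Mat4 a, Mat4 b)  // combine two transforms into one matrix
    {
        var r = new Mat4();
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                for (int k = 0; k < 4; k++)
                    r.M[row * 4 + col] += a.M[row * 4 + k] * b.M[k * 4 + col];
        return r;
    }

    public void TransformPoint(double x, double y, double z,
                               out double ox, out double oy, out double oz)
    {
        // w is implicitly 1, so the translation column is applied to points.
        ox = M[0] * x + M[1] * y + M[2]  * z + M[3];
        oy = M[4] * x + M[5] * y + M[6]  * z + M[7];
        oz = M[8] * x + M[9] * y + M[10] * z + M[11];
    }
}

// One cow model, fifteen matrices: each matrix carries that cow's heading and position,
// so moving a cow only touches its 16 numbers, never the model's vertices.
//   Mat4 cowWorld = Mat4.Translation(cowX, cowY, cowZ) * Mat4.RotationY(cowHeading);
//   cowWorld.TransformPoint(mx, my, mz, out wx, out wy, out wz);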

Finally, graphics cards have hardware (the GPU) that will multiply matrices and perform transformations much faster than your CPU. To take advantage of this, you simply pass the card your vertices and your matrices and it does the transformations very quickly, which lets your CPU do other work in the meantime (like AI, physics, and networking).

Well, considering that, it might be easier to go with something else. So should I go with vectors or quaternions? I've heard that quaternions use less data, but I'm not sure. And since vectors use homogeneous coordinates, what is the difference between homogeneous and Cartesian? Thanks.

A vector is a collection of related co-ordinates in a given n-dimensional space. E.g. a 3D space has a dimension "width", a dimension "height", and a dimension "depth". To describe a location in this space, you have to say where in the "width", where in the "height", and where in the "depth" you are. These three "where"s are the co-ordinates of the location in that space, and together they are written as a vector.

Now, you could have different spaces. Obviously, spaces may differ in their number of dimensions, but they may also differ in the mathematics defined on them. The space we're normally dealing with in 3D gfx is a 3D Euclidean space, which means that it provides directions, locations, and distances between points given by the well-known square root of the sum of the squared co-ordinate differences: sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2). Due to these features, rotations (directional changes), translations (location changes), and scalings (distance changes) are possible in this space.

If a space provides both locations and directions, how do you distinguish between them when both are represented by vectors? From a vector algebra point of view, you can distinguish them like so:
point - point = direction
direction + direction = direction
and derived from that
point + direction = point
That is, the relationship between locations and directions in a space can be given by rules. A rule like
point + point
for example, is undefined, since there is no meaningful geometric interpretation (okay, one could in fact define a meaning here too ;)

However, so far this is just a question of interpretation, since from looking at a vector alone you cannot tell whether it is a location or a direction. This is where the homogeneous co-ordinate comes into play.

You could expand your space by another dimension, called the homogeneous dimension, by adding another co-ordinate. For each fixed value of the homogeneous co-ordinate, your previous space is an aligned hyperplane in the new space. You can find a pair of hyperplanes that fulfil the vector algebra above by selecting the w==0 hyperplane for directions and the w==1 hyperplane for locations. Here is a simple proof:
(point,1) - (point,1) = (direction,0)
(direction,0) + (direction,0) = (direction,0)
and derived from that
(point,1) + (direction,0) = (point,1)
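As a small numeric illustration of that rule (made-up names, not library code): apply a pure translation by (tx, ty, tz) to a homogeneous vector. Because the translation sits in the matrix column that gets multiplied by w, a w == 1 vector (a point) is moved while a w == 0 vector (a direction) is left untouched:

using System;

static class HomogeneousDemo
{
    // Translation applied to a homogeneous vector (x, y, z, w):
    // the offset is scaled by w, so it only affects vectors with w == 1.
    static void Translate(double tx, double ty, double tz,
                          double x, double y, double z, double w,
                          out double rx, out double ry, out double rz, out double rw)
    {
        rx = x + tx * w;
        ry = y + ty * w;
        rz = z + tz * w;
        rw = w;
    }

    static void Main()
    {
        double rx, ry, rz, rw;

        Translate(10, 0, 0, 1, 2, 3, 1, out rx, out ry, out rz, out rw);
        Console.WriteLine($"point:     ({rx}, {ry}, {rz}, {rw})");   // prints (11, 2, 3, 1): the point moved

        Translate(10, 0, 0, 0, 0, 1, 0, out rx, out ry, out rz, out rw);
        Console.WriteLine($"direction: ({rx}, {ry}, {rz}, {rw})");   // prints (0, 0, 1, 0): the direction is unchanged
    }
}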

Quaternions are also vectors, but in a totally different space. The space of quaternions is 4D, with 1 real axis and 3 "imaginary" axes. I would suggest ignoring this for now and looking at quaternions simply as one way to describe a rotation. Perhaps it is best to do rotations with matrices first, before looking at quaternions.

Next, the Cartesian co-ordinate system is just a special kind of Euclidean space. AFAIK it is a Euclidean space with three dimensions, straight axes, and the right-hand rule (which defines an ordering of the three axes), but I may be wrong here. Perhaps you mean the "affine" space rather than the Cartesian space? In general, if you drop the homogeneous co-ordinate you reduce the space to an affine space.

As you can see, homogeneous vectors, Cartesian vectors, and quaternions are very different things, even though all of them are vectors. A vector can only be understood if you know the space in which it is defined.

EDIT: So the question of whether to use "vectors or quaternions" is not quite the right question. If your question is whether to use matrices or quaternions for rotations, then the answer is: you could be happy with matrices alone, but not with quaternions alone. Quaternions, on the other hand, use less memory and are more numerically stable. My suggestion: use both, whichever is better in a given situation.


Hopefully this somewhat lengthy overview has helped. If not, don't hesitate to ask.


Ok, that makes sense. I think I'll start with matrices. I've read up on vectors, so I know how they work. Where would be a good place to start learning about using matrices? Thanks.
