camera matrices

Started by
8 comments, last by Dmytry 19 years, 3 months ago
I have a matrix-based camera. When applying the matrices to OpenGL, I expected to do the following, in order:

//camera
matrix44 local=camera->local(); local.invert(); //localspace transforms
matrix44 world=camera->world(); world.invert(); //worldspace transforms
glMultMatrixd((const double*)&local[0][0]);
glMultMatrixd((const double*)&world[0][0]);

//object
glMultMatrixd((const double*)&object->local()[0][0]);
glMultMatrixd((const double*)&object->world()[0][0]);
However, it turns out experimentally that the inverts are incorrect: for it to render properly, you just submit the raw matrices. This works, but I want to get it "theoretically correct" before I start relying on these matrices. What's wrong with my design? I thought the procedure was: push camera inverse, push world, push object. However, to get it to work I had to use two matrices for each object and two for the camera, so that local rotations and scales are preserved. Incidentally, my scales are messed up. Can someone post the accepted procedure for this? Relevant source, if necessary:
class TangibleObject : public Object
{
private:     
    matrix44 localScale;
    matrix44 localRotation;
    matrix44 localTranslation;
    matrix44 worldScale;
    matrix44 worldRotation;
    matrix44 worldTranslation;

    matrix44 _local;
    matrix44 _world;

    bool localUpdate;
    bool worldUpdate;

public:
    matrix44 local();
    matrix44 world();

    void scale_local     (const double magnitude);
    void rotate_local    (const vector3& axis, const double radians);
    void translate_local (const vector3& v);
    void scale_world     (const double magnitude);
    void rotate_world    (const vector3& axis, const double radians);
    void translate_world (const vector3& v);
/*...*/
};

TangibleObject::TangibleObject()
{
    _local=_world=IdentityMatrix44();
    localScale=localRotation=localTranslation=IdentityMatrix44();
    worldScale=worldRotation=worldTranslation=IdentityMatrix44();
    localUpdate=worldUpdate=true;
}


void TangibleObject::scale_local(const double magnitude)
{ localScale *= ScaleMatrix44(magnitude,magnitude,magnitude); localUpdate=true; }

void TangibleObject::rotate_local(const vector3& axis, const double radians)
{ localRotation *= RotateRadMatrix44(axis, radians); localUpdate=true; }

void TangibleObject::translate_local(const vector3& v)
{ localTranslation *= TranslateMatrix44(v.x,v.y,v.z); localUpdate=true; }


void TangibleObject::scale_world(const double magnitude)
{ worldScale *= ScaleMatrix44(magnitude,magnitude,magnitude); worldUpdate=true; }

void TangibleObject::rotate_world(const vector3& axis, const double radians)
{ worldRotation *= RotateRadMatrix44(axis, radians); worldUpdate=true; }

void TangibleObject::translate_world(const vector3& v)
{ worldTranslation *= TranslateMatrix44(v.x,v.y,v.z); worldUpdate=true; }


void TangibleObject::scale(const double magnitude)
{ scale_local(magnitude); }

void TangibleObject::rotate(const vector3& axis, const double radians)
{ rotate_local(axis, radians); }

void TangibleObject::translate(const vector3& v)
{ translate_world(v); }


matrix44 TangibleObject::local() 
{
    if(localUpdate)
    {
        localUpdate=false;
        _local=localScale*localTranslation*localRotation;//*localScale;
    }
    return _local;
}

matrix44 TangibleObject::world() 
{
    if(worldUpdate)
    {
        worldUpdate=false;
        _world=worldScale*worldTranslation*worldRotation;//*worldScale;
    }
    return _world;
}

It's kind of a matter of convention. You might store the camera matrix such that you don't need the inverse. Everything depends on the rest of your code that builds these matrices. "Localspace transforms" and "worldspace transforms" are not really descriptive to me, but possibly your order is incorrect for either the camera or the object. The order should be different for the two.

With the camera, you need to first apply the camera rotation, then apply the camera translation, etc.; with an object, you need to first apply the object translation, then the object rotation, etc.

As for the accepted procedure... I use a quaternion camera, and I don't load 2 matrices at each step. In fact I have a quaternion coordinate-system class that has methods "load transform to" and "load transform from", and I use "load transform to" for the camera and "load transform from" for objects.

edit: and about "However, to get it to work I had to use two matrices for each object and two for the camera": it looks rather like you have the wrong order of multiplications where you generate these matrices, and you got the order right in OpenGL...
To add to how fscked up this is, I can change the world() and local() functions and rearrange the order of concatenations, and the following all seem to be equivalent:

_local=localScale*localTranslation*localRotation;
_local=localTranslation*localRotation*localScale;
_local=localScale*localRotation*localTranslation;
_local=localTranslation*localScale*localRotation;
_local=localRotation*localScale*localTranslation;


WTF, m8
hmm,

Translation*Rotation != Rotation*Translation
whenever the rotation and translation are not both the identity.

Maybe your matrix multiply is broken?

In any case, your bug is probably outside code you showed.
I'm using the GPG mtxlib; it's not broken :)
OK, how do you handle an object's (or camera's) orientation without using two matrices (or quats)? When you do translations you typically mean worldspace, but when you do rotations you typically mean localspace. Also, the opposite needs to be possible: what if you want to rotate in worldspace? I don't understand how you can achieve this without keeping them separate.

[edit] Additionally, do you keep your rotations, translations, and scales separate, let them accumulate, and then concatenate them right before submitting to OpenGL (like in the local() method shown)? Another way that produced good results, but that I don't understand, is just multiplying the local matrix by the rotation matrix in the rotate method, instead of accumulating rotations separately from everything else.
Is it possible to treat my camera exactly like a regular object (the same rotation methods and the same matrix concatenation) and differ only in the way it is sent to OpenGL?
You should only need one matrix for an object (going from object space to world space) and one matrix for the camera (going from world space to eye space).

Whether those are inverted or not is just a matter of implementation convention in your application. Most implementations don't store them pre-inverted. In fact, many implementations probably construct the matrix directly when needed from position and rotation values (and possibly scale).
enum Bool { True, False, FileNotFound };
Is your matrix44 class stored column-major?

You pass the matrix data directly to glMultMatrix, so OpenGL is going to assume that m[0][0] through m[0][3] represent the x-axis of the camera's coordinate system. This is the opposite of how most people store their matrices.

Also, you don't "push camera inverse, push world, push object". You just "push camera inverse, push object"... unless you're not storing your camera in world coordinates.
Quote:Original post by thedustbustr
OK, how do you handle an object's (or camera's) orientation without using two matrices (or quats)? When you do translations you typically mean worldspace, but when you do rotations you typically mean localspace. Also, the opposite needs to be possible: what if you want to rotate in worldspace? I don't understand how you can achieve this without keeping them separate.

[edit] Additionally, do you keep your rotations, translations, and scales separate, let them accumulate, and then concatenate them right before submitting to OpenGL (like in the local() method shown)? Another way that produced good results, but that I don't understand, is just multiplying the local matrix by the rotation matrix in the rotate method, instead of accumulating rotations separately from everything else.

My "coordinate system" class contains a vector and a quaternion.

With matrices, to rotate the camera around its local Y axis, I would do this:

camera_coordinate_system=camera_coordinate_system*my_turn;

I have overloaded * on coordinate systems so that it works exactly as it does for 4x4 matrices.

If I want to rotate the camera around the global Y axis, I do this:
camera_coordinate_system=my_turn*camera_coordinate_system;

It will work the same way with matrices if you load the inverse into OpenGL. Otherwise, if you store the matrix that should be loaded as-is, reverse the order of the multiplications and use the inverse turn (because (AB)^-1 = B^-1 * A^-1).

If I want to rotate the camera around the global Y axis without moving it, I do this:
camera_coordinate_system.orientation=my_turn*camera_coordinate_system.orientation;
where orientation may be either a matrix or a quaternion; it works the same way, at least with the conventions I use.

edit: as ajas95 has pointed out, there is probably something wrong with column-major vs. row-major storage.
It is possible that your matrix lib works in a DirectX-like way, so you'd need to do all matrix multiplications in the opposite order.

It is possible to screw up with matrices in a great many different ways, just like with any other math thing.

This topic is closed to new replies.
