Camera positioning

Hi, I have been thinking for a few days about how to properly create a camera matrix so that I can place the camera just like any other model in the scene and still get the view from the camera object.
I use these methods to get the camera (view) matrix:
public static float[] getViewMatrix(float[] position, float[] rotation) {
    float[] matTrans = Matrix3D.copy(position);
    float[] matRot = Matrix3D.copy(rotation);
    // Transpose the rotation (upper 3x3) part.
    float tmp;
    tmp = matRot[0 * 4 + 1]; matRot[0 * 4 + 1] = matRot[1 * 4 + 0]; matRot[1 * 4 + 0] = tmp;
    tmp = matRot[0 * 4 + 2]; matRot[0 * 4 + 2] = matRot[2 * 4 + 0]; matRot[2 * 4 + 0] = tmp;
    tmp = matRot[1 * 4 + 2]; matRot[1 * 4 + 2] = matRot[2 * 4 + 1]; matRot[2 * 4 + 1] = tmp;
    // Negate the translation.
    matTrans[3 * 4 + 0] *= -1.0f;
    matTrans[3 * 4 + 1] *= -1.0f;
    matTrans[3 * 4 + 2] *= -1.0f;
    return Matrix3D.multiply(matRot, matTrans);
}

public static final float[] multiply(float[] b, float[] a) {
    // Return identity if either matrix isn't 4x4.
    if (a.length != 16 || b.length != 16)
        return identity();
    // Initialize empty result matrix.
    float[] result = new float[16];
    // Multiply: result[i * 4 + j] = sum over l of a[i * 4 + l] * b[l * 4 + j].
    int k = 15;
    for (int i = 3; i >= 0; i--) {
        for (int j = 3; j >= 0; j--) {
            result[k] += a[i * 4] * b[j];
            result[k] += a[i * 4 + 1] * b[4 + j];
            result[k] += a[i * 4 + 2] * b[8 + j];
            result[k] += a[i * 4 + 3] * b[12 + j];
            k--;
        }
    }
    return result;
}

The position and rotation arguments are 4x4 matrices stored as flat arrays of 16 floats.
The problem is that looking around doesn't quite work in some directions unless I comment out these lines:
tmp = matRot[0 * 4 + 1]; matRot[0 * 4 + 1] = matRot[1 * 4 + 0]; matRot[1 * 4 + 0] = tmp;
tmp = matRot[0 * 4 + 2]; matRot[0 * 4 + 2] = matRot[2 * 4 + 0]; matRot[2 * 4 + 0] = tmp;
tmp = matRot[1 * 4 + 2]; matRot[1 * 4 + 2] = matRot[2 * 4 + 1]; matRot[2 * 4 + 1] = tmp;


Any ideas what I'm doing wrong? This is on Android.
The multiply method is tested and working.
Thank you, Martin.

EDIT:
If I understand correctly, I need to invert my camera's position and rotation matrix to get the view from where the camera is positioned by the original matrix. Is there something wrong with my matrix inverse implementation?
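
For reference, the general rule for a rigid transform is that the inverse reverses the composition order: if the camera's world matrix is M = T·R then M⁻¹ = R⁻¹·T⁻¹ = Rᵀ·T(−t), and if it is M = R·T then M⁻¹ = T(−t)·Rᵀ. In both cases R⁻¹ is just the transpose Rᵀ and T(t)⁻¹ is the translation by −t, which is what the transposing and negating above is doing; the open question in this thread is only the multiplication order.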
I'm a little bit rusty here, so take this with some skepticism in case I'm incorrect.

To get the view matrix from the position of an object, I believe you want the inverse of that object's model matrix. With typical OpenGL notation (translate (T) then rotate (R)), you'd get the model matrix by M = R·T. To get the inverse of M (M⁻¹), you would find (R·T)⁻¹, which is T⁻¹·R⁻¹. You are finding T⁻¹ and R⁻¹, but it looks like you're multiplying R⁻¹·T⁻¹ instead of T⁻¹·R⁻¹ (if I'm understanding your matrix multiply arguments correctly).

Long story short, maybe you should do
Matrix3D.multiply(matTrans, matRot);

instead?
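
If it is unclear which ordering is right, one way to check, since this is on Android, is to skip the hand-built composition and numerically invert the camera's model matrix with android.opengl.Matrix, then compare the result element by element with what getViewMatrix returns. A minimal sketch, assuming a 16-element column-major model matrix like the rest of the code (viewFromModel is just an illustrative name):

import android.opengl.Matrix;

// Sketch: derive the view matrix by numerically inverting the camera's
// model (world) matrix, for comparison with a hand-composed version.
public static float[] viewFromModel(float[] cameraModel) {
    float[] view = new float[16];
    if (!Matrix.invertM(view, 0, cameraModel, 0)) {
        // A valid camera transform should always be invertible;
        // fall back to identity if it is not.
        Matrix.setIdentityM(view, 0);
    }
    return view;
}

Multiplying cameraModel by the result with Matrix.multiplyMM should give something close to the identity if the two are consistent.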
My Projects:
Portfolio Map for Android - Free Visual Portfolio Tracker
Electron Flux for Android - Free Puzzle/Logic Game
Hi karwosts, unfortunately it doesn't work that way; the camera just starts rotating around one point.
I've forgotten a lot of this, but these tutorials might help a bit: http://www.arcsynthesis.org/gltut/Positioning/Tutorial%2008.html.
Transpose the rotation part of the matrix. Then extract the right, up, and look vectors. For the last row, which is the camera position, just take -Dot(right, position), -Dot(up, position), and -Dot(look, position) for the x, y, z of the camera, where position refers to the last row of your original matrix.
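
A minimal sketch of that approach, in the same flat-array style as the code above, assuming the usual OpenGL layout (translation in elements 12-14, right/up/look in elements 0-2, 4-6 and 8-10) and that the upper 3x3 is a pure rotation with no scaling; viewFromCameraMatrix is just an illustrative name:

// Sketch: build a view matrix from a camera world matrix by transposing the
// rotation part and using negative dot products for the translation entries.
public static float[] viewFromCameraMatrix(float[] cam) {
    // Basis vectors and position of the camera's world matrix.
    float rx = cam[0],  ry = cam[1],  rz = cam[2];   // right
    float ux = cam[4],  uy = cam[5],  uz = cam[6];   // up
    float lx = cam[8],  ly = cam[9],  lz = cam[10];  // look
    float px = cam[12], py = cam[13], pz = cam[14];  // position

    float[] view = new float[16];
    // Transposed rotation part.
    view[0] = rx; view[4] = ry; view[8]  = rz;
    view[1] = ux; view[5] = uy; view[9]  = uz;
    view[2] = lx; view[6] = ly; view[10] = lz;
    // Last "row": negative dot products of each basis vector with the position.
    view[12] = -(rx * px + ry * py + rz * pz);
    view[13] = -(ux * px + uy * py + uz * pz);
    view[14] = -(lx * px + ly * py + lz * pz);
    view[15] = 1.0f;
    return view;
}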
