# How to properly align a weapon model with a first-person camera

## Recommended Posts

The tools I am using are GCC, OpenGL ES 2 and GLM.

I am trying to align a weapon model with the first-person camera. The main issue is that I cannot figure out how to "anchor the weapon to the camera", i.e. work in camera space and offset and rotate the model so it sits in view like in an FPS game.

With the help of GDN, I created this camera class:

[spoiler]

#pragma once

#include <GLES2/gl2.h>

#include "deps/glm/glm.hpp"
#include "deps/glm/gtc/matrix_transform.hpp"

#define DEFAULT_YAW -90.f
#define DEFAULT_PITCH 0.f
#define DEFAULT_SPEED 15.f
#define DEFAULT_SCROLL_SPEED 0.04f
#define DEFAULT_SENSE .24f
#define DEFAULT_FOV_Y 45.f
#define DEFAULT_RATIO 16.f/9.f
#define DEFAULT_NEAR .1f
#define DEFAULT_FAR 1024.f

#define MIN_FOV 44.3f
#define MAX_FOV DEFAULT_FOV_Y
#define MIN_PITCH -89.f
#define MAX_PITCH 89.f

class Camera
{
public:
    Camera(glm::vec3 pos = default_pos, glm::vec3 up = default_up) :
        m_Position(pos),
        m_WorldUp(up)
    {
        UpdateVectors(); // Otherwise glm::lookAt will complain.
    }
protected:
    glm::vec3 m_Position; // Set by ctor.
    glm::vec3 m_WorldUp;  // Set by ctor.

    glm::vec3 m_Target; // Set by UpdateVectors().
    glm::vec3 m_Right;  // Set by UpdateVectors().
    glm::vec3 m_Up;     // Set by UpdateVectors().

    GLfloat m_Ratio = DEFAULT_RATIO;
    GLfloat m_FOV = DEFAULT_FOV_Y;
    GLfloat m_FOVmin = MIN_FOV;
    GLfloat m_FOVmax = MAX_FOV;

    GLfloat m_Yaw = DEFAULT_YAW;
    GLfloat m_Pitch = DEFAULT_PITCH;
    GLfloat m_PitchMin = MIN_PITCH;
    GLfloat m_PitchMax = MAX_PITCH;

    GLfloat m_MovementSpeed = DEFAULT_SPEED;
    GLfloat m_ScrollSpeed = DEFAULT_SCROLL_SPEED;
    GLfloat m_MouseSenseX = DEFAULT_SENSE;
    GLfloat m_MouseSenseY = DEFAULT_SENSE;
public:
    glm::mat4 View() const { return glm::lookAt(m_Position, m_Target + m_Position, m_Up); }

    // glm::perspective expects the vertical FOV in radians, so convert the stored degrees.
    glm::mat4 Projection() const { return glm::perspective(glm::radians(m_FOV), m_Ratio, DEFAULT_NEAR, DEFAULT_FAR); }

    glm::mat4 ViewProjection() const { return Projection() * View(); }

    glm::vec3 const& Position() const { return m_Position; }
    void Position(glm::vec3 pos) { m_Position = pos; }

    glm::vec3 Target() const { return m_Target; }
    void Target(GLfloat x, GLfloat y, GLfloat z) { m_Target = glm::vec3(x, y, z); }

    glm::vec3 Right() const { return m_Right; }
    glm::vec3 WorldUp() const { return m_WorldUp; }
    glm::vec3 Up() const { return m_Up; }

    GLfloat const& GetRatio() const { return m_Ratio; }
    GLfloat const& GetFOV() const { return m_FOV; }
    GLfloat const& GetYaw() const { return m_Yaw; }
    GLfloat const& GetPitch() const { return m_Pitch; }

    void SetYaw(GLfloat yaw) { m_Yaw = yaw; }
    void SetPitch(GLfloat pitch) { m_Pitch = pitch; }

    void MoveForward(GLfloat time_delta) { m_Position += m_Target * m_MovementSpeed * time_delta; }
    void MoveBack(GLfloat time_delta) { m_Position -= m_Target * m_MovementSpeed * time_delta; }
    void MoveLeft(GLfloat time_delta) { m_Position -= m_Right * m_MovementSpeed * time_delta; }
    void MoveRight(GLfloat time_delta) { m_Position += m_Right * m_MovementSpeed * time_delta; }
    void MoveUp(GLfloat time_delta) { m_Position += m_WorldUp * m_MovementSpeed * time_delta; }
    void MoveDown(GLfloat time_delta) { m_Position -= m_WorldUp * m_MovementSpeed * time_delta; }

    // Remember to call UpdateVectors() after changing yaw or pitch.
    void Yaw(GLfloat offset) { m_Yaw += offset * m_MouseSenseX; }

    void Pitch(GLfloat offset)
    {
        m_Pitch += offset * m_MouseSenseY;

        if (m_Pitch > m_PitchMax)
        {
            m_Pitch = m_PitchMax;
        }
        else if (m_Pitch < m_PitchMin)
        {
            m_Pitch = m_PitchMin;
        }
    }

    void Zoom(GLfloat y)
    {
        if (m_FOV >= m_FOVmin && m_FOV <= m_FOVmax)
        {
            m_FOV -= y * m_ScrollSpeed;
        }
        if (m_FOV > m_FOVmax)
        {
            m_FOV = m_FOVmax;
        }
        if (m_FOV < m_FOVmin)
        {
            m_FOV = m_FOVmin;
        }
    }

    void UpdateVectors()
    {
        // Compute the look direction from yaw and pitch (in degrees).
        glm::vec3 front;
        front.x = glm::cos(glm::radians(m_Yaw)) * glm::cos(glm::radians(m_Pitch));
        front.y = glm::sin(glm::radians(m_Pitch));
        front.z = glm::sin(glm::radians(m_Yaw)) * glm::cos(glm::radians(m_Pitch));

        m_Target = glm::normalize(front);
        m_Right = glm::normalize(glm::cross(m_Target, m_WorldUp));
        m_Up = glm::normalize(glm::cross(m_Right, m_Target));
    }
};

[/spoiler]

All my models inherit from the Transformable class, which I use to create the model matrix, dubbed Transform(), take a look:

[spoiler]

#pragma once

#include <GLES2/gl2.h>

#include "deps/glm/gtc/matrix_transform.hpp"

class Transformable
{
public:
    Transformable() {}

    Transformable(glm::vec3 const& pos, glm::vec3 const& size)
    {
        Translate(pos.x, pos.y + size.y / 2.f, pos.z); // Place on the y-coordinate.
        Scale(size); // Set initial size.
    }
protected:
    // Initialize to identity; glm's default constructor may leave matrices uninitialized.
    glm::mat4 m_TransformTranslation = glm::mat4(1.f);
    glm::mat4 m_TransformScaling = glm::mat4(1.f);

    glm::mat4 m_TransformRotationX = glm::mat4(1.f);
    glm::mat4 m_TransformRotationY = glm::mat4(1.f);
    glm::mat4 m_TransformRotationZ = glm::mat4(1.f);

    glm::vec3 m_Orientation = glm::vec3(0.f);
public:
    void Position(glm::vec3 const& pos) { m_TransformTranslation = glm::translate(glm::mat4(1.f), pos); }

    glm::vec4 const& Position() const { return m_TransformTranslation[3]; }
    glm::mat4 Rotation() const { return m_TransformRotationX * m_TransformRotationY * m_TransformRotationZ; }

    GLfloat X() const { return Position().x; }
    GLfloat Y() const { return Position().y; }
    GLfloat Z() const { return Position().z; }

    GLfloat Width() const { return m_TransformScaling[0].x; }
    GLfloat Height() const { return m_TransformScaling[1].y; }
    GLfloat Length() const { return m_TransformScaling[2].z; }

    // glm::rotate expects radians, so convert the stored degrees.
    void OrientX(GLfloat deg) { m_Orientation.x = deg; m_TransformRotationX = glm::rotate(glm::mat4(1.f), glm::radians(m_Orientation.x), glm::vec3(1, 0, 0)); }
    void OrientY(GLfloat deg) { m_Orientation.y = -deg; m_TransformRotationY = glm::rotate(glm::mat4(1.f), glm::radians(m_Orientation.y), glm::vec3(0, 1, 0)); } // Note the sign flip on Y.
    void OrientZ(GLfloat deg) { m_Orientation.z = deg; m_TransformRotationZ = glm::rotate(glm::mat4(1.f), glm::radians(m_Orientation.z), glm::vec3(0, 0, 1)); }

    void Translate(glm::vec3 const& vec) { m_TransformTranslation = glm::translate(m_TransformTranslation, vec); }
    void Scale(glm::vec3 const& vec) { m_TransformScaling = glm::scale(glm::mat4(1.f), vec); }

    void Translate(GLfloat x, GLfloat y, GLfloat z) { Translate(glm::vec3(x, y, z)); }
    void Scale(GLfloat x, GLfloat y, GLfloat z) { Scale(glm::vec3(x, y, z)); }

    glm::mat4 Transform() const { return m_TransformTranslation * Rotation() * m_TransformScaling; }
};

[/spoiler]

I do all the offsetting, yawing and pitching in the player class, which holds the model for the weapon.

[spoiler]

#pragma once

#include "camera.h"
#include "model.h"

class Player
{
public:
    Player(Model *weapon) :
        m_Camera(new Camera),
        m_Weapon(weapon)
    {}

    ~Player()
    {
        delete m_Camera;
    }
private:
    Camera *m_Camera;
    Model *m_Weapon;

    glm::vec3 m_Position;
    glm::vec3 m_Rotation;

    GLfloat m_MovementSpeed = 100.f;
    GLfloat m_MouseSenseX = 1;
    GLfloat m_MouseSenseY = 1;
public:
    Camera* PlayerCamera() const { return m_Camera; }
    Model* GetWeaponModel() const { return m_Weapon; }

    void Update()
    {
        m_Camera->Position(m_Position);
        m_Camera->Update();

        glm::vec3 weapon_offset_y = glm::vec3(0, 10, 0);

        m_Weapon->Position(m_Position + weapon_offset_y);
        m_Weapon->OrientY(m_Camera->GetYaw() + 30);
        m_Weapon->OrientZ(m_Camera->GetPitch() + 30);
        m_Weapon->OrientX(20);
    }

    void Position(glm::vec3 pos) { m_Position = pos; Update(); }
    void SetYaw(GLfloat deg) { m_Camera->SetYaw(deg); Update(); }

    void MoveForward(GLfloat delta) { m_Position += m_Camera->Target() * m_MovementSpeed * delta; Update(); }
    void MoveBack(GLfloat delta) { m_Position -= m_Camera->Target() * m_MovementSpeed * delta; Update(); }
    void MoveLeft(GLfloat delta) { m_Position -= m_Camera->Right() * m_MovementSpeed * delta; Update(); }
    void MoveRight(GLfloat delta) { m_Position += m_Camera->Right() * m_MovementSpeed * delta; Update(); }

    void Yaw(GLfloat delta) { m_Camera->Yaw(m_MouseSenseX * delta); Update(); }
    void Pitch(GLfloat delta) { m_Camera->Pitch(m_MouseSenseY * delta); Update(); } // Pitch uses the Y sensitivity.
};

[/spoiler]

As you can see, the model matrix is currently a translation * rotation * scale matrix, and the camera provides a projection * view matrix via ViewProjection(). I multiply the two and pass the result as a uniform to the shader.

It seems I am doing the offsets and rotations wrong, because when I move the camera, the weapon model swings in arcs and shifts around oddly.

Any suggestions and advice are much appreciated!

Edited by Byrkoet

##### Share on other sites

I think it's a matter of order. You want to position the weapon at the camera, rotate it, then offset it in the direction the camera is facing.

So some pseudocode would look like:

m_Weapon->Position(m_Camera->GetPosition());
m_Weapon->OrientY(m_Camera->GetYaw() + 30);
m_Weapon->OrientZ(m_Camera->GetPitch() + 30);
weapon_position += m_Camera->getForward() * weapon_offset;

##### Share on other sites

I asked the question on Stack Exchange, and someone recommended drawing the weapon the way you would a skybox. I tried the following MVP matrix, but to no avail:

glm::perspective(45.f, 16.f/9.f, 0.1f, 100.f) * m_Weapon->Transform()

For the skybox I simply cut off the translation by casting the view matrix to glm::mat3, but for the weapon I figured I could instead just apply perspective and offset the model along the x- or z-axis. Am I missing the point here?

Also, is the method of rotation I am using viable? That is, does building the rotation by multiplying the separately rotated axes, in the order rotation_x, rotation_y, rotation_z, actually work well?

Edited by Byrkoet

##### Share on other sites

To attach a weapon to the camera: get the view matrix, invert it to turn it into a world/object matrix, multiply it by the weapon's world/object matrix, and use the result instead of the weapon's world/object matrix to draw it. This makes the weapon a child of the camera and thus attaches it; the weapon's position and orientation become relative to the camera. You may have to reverse the order of multiplication if it doesn't work right; the order determines whether you rotate around a local or a global axis.

Although, what you probably want is a "pivot point"/"attachment point" matrix in between the two, so that you can control the point relative to the camera where the weapon is actually attached, and even rotate the pivot point to rotate the weapon rather than rotating the weapon itself. That would be inverse view times the pivot-point matrix times the weapon's world/object matrix, and you use the result to draw the weapon. Again, you might have to reverse the order of multiplication to get the desired result. The pivot point becomes an origin relative to the camera's position and orientation (which is relative to the world origin), and the weapon becomes relative to the pivot point.

That's similar to what I have going on in this code:

// Yellow cube controls.
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_I && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    Cube.Transform(glm::translate(glm::mat4(), glm::vec3(0.0f, 0.0f, 0.05f)));
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_K && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    Cube.Transform(glm::translate(glm::mat4(), glm::vec3(0.0f, 0.0f, -0.05f)));
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_L && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    Cube.Transform(glm::translate(glm::mat4(), glm::vec3(0.05f, 0.0f, 0.0f)));  // Body lost in the original post; assumed by analogy with I/K.
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_J && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    Cube.Transform(glm::translate(glm::mat4(), glm::vec3(-0.05f, 0.0f, 0.0f))); // Likewise assumed.

if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_Y && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    Cube2Pivot = glm::rotate(glm::mat4(), glm::radians<float>(1), glm::vec3(0.0f, 0.0f, 1.0f)) * Cube2Pivot;
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_H && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    Cube2Pivot = glm::rotate(glm::mat4(), glm::radians<float>(-1), glm::vec3(0.0f, 0.0f, 1.0f)) * Cube2Pivot;

Cube2Pivot = Cube2Pivot * glm::rotate(glm::mat4(), glm::radians<float>(1), glm::vec3(0.0f, 1.0f, 0.0f));
Cube2.WorldMatrix = Cube.WorldMatrix * Cube2Pivot * Cube2World;


There I have a cube attached to another cube (rather than the camera) through a pivot/attachment point. Every frame I set the second cube's world/object matrix to the primary cube's matrix times the pivot's times the matrix I maintain for the second cube. Cube2World is the matrix I change for the attached object, and Cube2.WorldMatrix is what I actually use to draw the second cube.

You can make chains with as many links as you want in this way; each link is relative to the last.

Edited by BBeck

##### Share on other sites

Apply the transform relative to the camera, then apply the camera's world matrix, then draw.

If the camera is at the player's location in the world, with the player's rotation, and 5.5 feet off the ground, and the weapon is 2 feet in front of and 2 feet below the camera, facing the same way as the player: move the gun -2 in y and +2 in z, then apply the camera's transform to the result, then draw. Done deal.

Note that inverting a matrix can introduce floating-point error. I never invert a matrix.

Edited by Norman Barrows

##### Share on other sites

In this case, the inverted matrix only exists for a single frame. Any discrepancies in the math would generally only exist for less than a 60th of a second before the inverted matrix is thrown away and completely rebuilt from scratch in the next frame.

But you can generally avoid inverting by understanding that all the math must be done "backwards" on a view matrix, which is really all the inversion is doing: instead of doing the math backwards, inverting makes the matrix itself backwards so that forward-running math works on it. Kind of like adding a positive number to a negative number versus subtracting one number from another: same end result.

##### Share on other sites

> To attach a weapon to the camera. Get the view matrix. Invert it to turn it into a world/object matrix. Multiply it times the weapon's world/object matrix. And use the results instead of the weapon's world/object matrix to draw it. [...] You can make chains with as many links in them as you want such as this. Each link is relative to the last.

Your pivot point looks like a simple transformation of your cube, which I believe isn't recommended for first-person games. Or am I missing the point here?

> apply the transform relative to the camera, then apply the camera's worldmat, then draw. [...] note that inverting a mat can cause float errors. never invert a mat.

What is meant by "the camera's world matrix"? It would be best if you showed some code examples. And if you mean the model matrix of the camera, you can clearly see from the OP that no such thing is available.

##### Share on other sites

> Your pivot point looks like a simple transformation of your cube, which isn't recommended for first-person games I believe. Or am I missing the point here?

It's matrix algebra. So, yes, it's basically a simple transformation. Why would it not be recommended, especially for first-person 3D games?

This is basically how the graphics card works. Almost all 3D shaders want a projection, a view, and a world/object matrix to draw an object. When I can, I keep as close to those three matrices as possible; in some cases I need other storage, but as much as possible I keep the data in the three matrices that get sent to the graphics card as requirements to draw.

In the case above, I draw two cubes: Cube and Cube2. Cube2 is a child of Cube and orbits its parent as an attachment. I could have made that relationship direct, but I wanted to change the orbit without directly affecting either cube, so I created a matrix between them as the origin of Cube2. Now Cube is the grandparent, the pivot point is the parent, and Cube2 is the child. These could have been three objects parented together, like a body, an upper arm, and a lower arm.

This is basically how "bones" work: the humanoid armature that controls a 3D model is just matrices that work pretty much exactly like this. I have a code example where I export Blender animation data for the standard humanoid armature and play the animations back as stick figures; pretty much all of it is linked parent-child matrices like this, one per bone.

Granted, none of this is working directly with the camera, but the view matrix is basically just another world/object matrix other than it does everything backwards (it's inverted).

A lot of times, quaternions are used instead of matrices. A quaternion is basically the same thing as a 3-by-3 matrix in that it contains an orientation; a 4-by-4 matrix has the advantage of also containing the position simultaneously, which a quaternion cannot. The quaternion, however, has the advantage of natively supporting SLERP (Spherical Linear intERPolation). SLERP is the "cousin" of LERP (Linear intERPolation), which is basically a weighted average between two numbers or pieces of data. The only way I know to SLERP matrices is to decompose the matrix, pull the data out, do the SLERP, and put the data back, which is pretty ugly in terms of CPU cycles. Quaternions handle SLERP natively, without decomposing, and are thus far superior to matrices for it.

SLERP is needed for keyframe animation. With a skinned model like the humanoid model I mention above, the animation data contains only key frames: poses of the model at given frames. Your code has to create the poses between the keyframes to transition the model from one key frame to the next. The poses are basically just matrices of the model's bones, or quaternions plus a position storing the same bone data. Either way you have to interpolate the orientation values between poses, and quaternions, as far as I can tell, handle that far more elegantly. You can certainly use matrices, but then you have to decompose and recompose them on every interpolation, and skinned-animation playback is a whole lot of interpolating.

But as long as you don't have to interpolate, such as finding the matrix 30% of the way between two others, I would consider matrices superior to quaternions, since the graphics card is going to require you to submit a matrix, not a quaternion, when it draws the object. At least, I haven't yet come across a shader that takes quaternions as input; perhaps they are out there and I just haven't seen one. For any shader I have seen, you would have to convert every last one of those quaternions into 3-by-3 and then 4-by-4 matrices before drawing.

As far as chaining matrices like in the above code example, I don't see any reason why not. How else would you do skinned animation (other than a quaternion of course which is basically the same thing for the purpose of game programming)?

> Oh, and if you mean the model matrix of the camera, you can clearly see from the OP that there is no such thing available.

Actually, it is available in the code you posted. That's what this is, since it's declared public:

public:
    glm::mat4 View() const { return glm::lookAt(m_Position, m_Target + m_Position, m_Up); }


The view matrix is the "model" matrix of the camera, which is part of the point I've been driving at. It's inverted compared to normal "model" matrices. But it's still for all practical purposes the same thing. You can either invert it in order to treat it like any other model matrix, or you can simply understand that it is backwards and perform all operations using it backwards. I like inverting it myself.

Edited by BBeck

##### Share on other sites

> Actually, it is available in the code you posted. [...] The view matrix is the "model" matrix of the camera, which is part of the point I've been driving at. It's inverted compared to normal "model" matrices. But it's still for all practical purposes the same thing. You can either invert it in order to treat it like any other model matrix, or you can simply understand that it is backwards and perform all operations using it backwards. I like inverting it myself.

Good explanation; I think I understand it now.
