redneon

Calculating view matrix from a free cam


As per tradition, every couple of years I start writing a new framework/engine, and I've been thinking about something I've always done that might not be the best way of doing it, so I thought I'd ask for some advice.

I always have an FPS-style free cam which I use for my model viewer etc., and the way I always calculate the view matrix in this cam is along these lines:

m_pitch += pitchThisFrame;   // accumulate this frame's input into absolute angles
m_yaw += yawThisFrame;

// Rebuild the full rotation from the absolute pitch/yaw each frame.
Matrix pitchMat = Matrix::CreateXRotation(m_pitch);
Matrix yawMat = Matrix::CreateYRotation(m_yaw);
Matrix rotation = yawMat * pitchMat;

// The camera's basis vectors are the columns of the rotation matrix.
const Vector& left = rotation.GetColumn0();
const Vector& up = rotation.GetColumn1();
const Vector& forward = rotation.GetColumn2();

// Move along the camera's own axes, then build the view matrix.
m_position += left * xTransThisFrame;
m_position += forward * zTransThisFrame;

m_view = Matrix::CreateLookAt(m_position, m_position + forward, up);


As you can see, I basically have my free cam class store a vector for the position and then pitch and yaw values for the rotation of the camera. Then I use my CreateLookAt() function to generate the view matrix from these values each frame.

This has always worked for me but I have been wondering if this is the "correct" way to do it. Perhaps it would be better to just modify the view matrix directly each frame? Though, I suppose I would have to orthonormalise regularly if I went down that route...
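
Something like this is what I mean by orthonormalising (a minimal sketch with a hypothetical Vec3 type and helpers, not my actual Matrix/Vector classes):

#include <cmath>

// Minimal sketch: re-orthonormalise a camera basis that has drifted after
// many incremental rotations. Vec3 and the helpers below are hypothetical
// stand-ins, not the engine's actual Matrix/Vector classes.
struct Vec3 { float x, y, z; };

float Dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 Cross(const Vec3 &a, const Vec3 &b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 Normalize(const Vec3 &v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Gram-Schmidt style: trust forward, rebuild the other two axes from it.
void Orthonormalize(Vec3 &left, Vec3 &up, Vec3 &forward) {
    forward = Normalize(forward);
    left    = Normalize(Cross(up, forward)); // left = up x forward
    up      = Cross(forward, left);          // unit length by construction
}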

A common way, for example the way in which Halo * does it, is to store a forward, right, and up vector, along with position.

The vectors are rotated in unison when the camera changes view angles. One of the view directions (typically the right vector) could be omitted and derived when needed, but I prefer to keep them all because it makes a lot of other things faster.

In either case you need to use vectors. A simple yaw/pitch system does not allow for rolling, which could be important for any FPS game to simulate head-bobbing as you walk.


L. Spiro

But that's storing exactly the same data as the view matrix, albeit in separate variables. Surely with that method you're just duplicating data that can already be obtained and modified by accessing the view matrix directly?

A matrix is a specific format in which the graphics engine expects data. It has no meaning to a camera, except in the special case in which the camera knows the graphics library is going to ask it for this information and thus knows how to generate it.

Which is, as implied by what you pointed out, very easy.

It is your engine and you can do as you please. But it is not generally done as an actual matrix for the sake of abstraction. The camera’s orientation is actually no different from any other object’s orientation, which means it also has a scale factor.

A camera is unlikely to use it, but because it is simply “another object in the world”, it contains a “COrientation” instance member which includes the above information and a scale factor.

By abstracting it in this manner you don’t have to write specialized code for cameras. Cameras are updated in exactly the same manner as every other object.
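
Roughly, such a shared orientation type might look like this (an illustrative sketch only; the member layout is an assumption, not the engine's actual code):

struct CVector3 { float x, y, z; };    // placeholder for the engine's real CVector3

// Illustrative sketch of a shared orientation component. Every object in the
// world, cameras included, would carry one of these.
class COrientation {
public :
    CVector3 m_vRight   = { 1.0f, 0.0f, 0.0f };
    CVector3 m_vUp      = { 0.0f, 1.0f, 0.0f };
    CVector3 m_vForward = { 0.0f, 0.0f, 1.0f };
    CVector3 m_vPos     = { 0.0f, 0.0f, 0.0f };
    CVector3 m_vScale   = { 1.0f, 1.0f, 1.0f };  // Unused by cameras, used by everything else.
    bool     m_bDirty   = true;                  // Set when anything changes; the matrix is rebuilt lazily.
};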


L. Spiro

Yeah, I suppose. I didn't think about the scale factor, really. So, if I were to store position, left, up and forward separately I'd want to update my camera something like this:

// Build a rotation from this frame's pitch/yaw input (here m_pitch and m_yaw
// would need to hold this frame's deltas, not accumulated angles)...
Matrix pitchMat = Matrix::CreateXRotation(m_pitch);
Matrix yawMat = Matrix::CreateYRotation(m_yaw);
Matrix rotation = yawMat * pitchMat;

// ...and apply it incrementally to the stored basis vectors.
m_up = rotation * m_up;
m_forward = rotation * m_forward;
m_left = rotation * m_left;

// Move along the camera's own axes.
m_position += m_left * xTransThisFrame;
m_position += m_forward * zTransThisFrame;

Then I could either set the view matrix directly like this:

// Note: these columns form the camera's world (camera-to-world) transform;
// the view matrix is its inverse, so it would still need inverting.
m_view.SetColumn0(m_left);
m_view.SetColumn1(m_up);
m_view.SetColumn2(m_forward);
m_view.SetColumn3(m_position);


Or continue to use my CreateLookAt() function like this:

m_view = Matrix::CreateLookAt(m_position, m_position + m_forward, m_up);

I haven't tested any of this out yet, of course. Just mulling things over in my head :)

Your example shows you storing the camera direction via pitch and yaw, updating the look vectors, and then using that to create the matrix.
You need to let go of the yaw/pitch concept in its entirety.

This is more what I had in mind:



class CCamera : public CActor {
protected :
    // == Members.

    /**
     * Orientation.
     */
    mutable CVector3 m_vRight,
                     m_vUp,
                     m_vForward;

    /**
     * Position.
     */
    CVector3 m_vPos;


private :
    typedef CActor Parent;
};





/**
 * Set the rotation component of this object from forward and up vectors. The right vector
 * is derived automatically.
 *
 * \param _vForward The forward component of the rotation.
 * \param _vUp The up component of the rotation.
 */
LSVOID LSE_FCALL CCamera::SetRotation( const CVector3 &_vForward, const CVector3 &_vUp ) {
    m_vForward = _vForward;
    m_vUp = _vUp;
    m_vRight = _vUp % _vForward;    // '%' is the cross-product operator here.
    m_ui32Dirty |= (LSM_ODF_ROTATION | LSM_ODF_NORMALIZATION);
}





/**
 * Add a rotation to the existing rotation of this object.
 *
 * \param _mMat The rotation, in matrix form, to add to the existing rotation of this object.
 */
LSVOID LSE_FCALL CCamera::AddRotation( const CMatrix4x4 &_mMat ) {
    // Multiply each stored basis vector by the rotation (as a row vector times the matrix).
    m_vRight.Set(
        CVector3::DotProduct( m_vRight, CVector3( _mMat._11, _mMat._21, _mMat._31 ) ),
        CVector3::DotProduct( m_vRight, CVector3( _mMat._12, _mMat._22, _mMat._32 ) ),
        CVector3::DotProduct( m_vRight, CVector3( _mMat._13, _mMat._23, _mMat._33 ) ) );

    m_vForward.Set(
        CVector3::DotProduct( m_vForward, CVector3( _mMat._11, _mMat._21, _mMat._31 ) ),
        CVector3::DotProduct( m_vForward, CVector3( _mMat._12, _mMat._22, _mMat._32 ) ),
        CVector3::DotProduct( m_vForward, CVector3( _mMat._13, _mMat._23, _mMat._33 ) ) );

    m_vUp.Set(
        CVector3::DotProduct( m_vUp, CVector3( _mMat._11, _mMat._21, _mMat._31 ) ),
        CVector3::DotProduct( m_vUp, CVector3( _mMat._12, _mMat._22, _mMat._32 ) ),
        CVector3::DotProduct( m_vUp, CVector3( _mMat._13, _mMat._23, _mMat._33 ) ) );

    m_ui32Dirty |= (LSM_ODF_ROTATION | LSM_ODF_NORMALIZATION);
}



Etc.


Note that in my actual organization, the camera does not have these vectors.
The camera inherits from CActor, as all objects do, which has a COrientation member, which has these vectors and these functions.


L. Spiro

But the rotation matrix you're passing into AddRotation() has to be created somewhere so what's wrong with creating it via pitch, yaw and roll changes (from controller input) each update?

I haven't yet added a Quaternion class, otherwise I'd use that instead of pitch, yaw and roll. The only reason I'm using just pitch and yaw is because I don't want to encounter gimbal lock, and for a simple model-view camera I only need pitch and yaw.
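
For example, the per-frame delta could be built something like this (a self-contained sketch with hypothetical Vec3/Mat3 types, not my actual classes):

#include <cmath>

// Hypothetical minimal types; the thread's real Matrix/CVector3 classes are not shown here.
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

Mat3 RotationY(float a) { // yaw about the Y axis
    float c = std::cos(a), s = std::sin(a);
    return {{{ c, 0, s }, { 0, 1, 0 }, { -s, 0, c }}};
}

Mat3 RotationX(float a) { // pitch about the X axis
    float c = std::cos(a), s = std::sin(a);
    return {{{ 1, 0, 0 }, { 0, c, -s }, { 0, s, c }}};
}

Mat3 Mul(const Mat3 &a, const Mat3 &b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Vec3 Mul(const Mat3 &m, const Vec3 &v) {
    return { m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z,
             m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z,
             m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z };
}

// Each frame: build the delta rotation from this frame's input only, then
// rotate the stored basis vectors by it (and orthonormalise occasionally).
void AddYawPitch(Vec3 &right, Vec3 &up, Vec3 &forward,
                 float yawDelta, float pitchDelta) {
    Mat3 delta = Mul(RotationY(yawDelta), RotationX(pitchDelta));
    right   = Mul(delta, right);
    up      = Mul(delta, up);
    forward = Mul(delta, forward);
}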

Another possibility is to do away with rotation matrices and instead use Quaternions to represent rotations in your system. While they will require expansion to a matrix for final rendering, they require both less space and less maths to perform the same operations than you would need with a matrix.

(They also make certain operations much easier, such as smooth interpolation between two points.)
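
To sketch the idea (hypothetical names, not any particular library's API): the orientation lives as a unit quaternion, rotations compose by multiplication, and it is only expanded to a matrix when the renderer needs one.

#include <cmath>

// Minimal sketch of a unit quaternion used as a rotation; hypothetical names.
struct Quat { float w, x, y, z; };

// Build a quaternion from an axis (assumed unit length) and an angle in radians.
Quat FromAxisAngle(float ax, float ay, float az, float angle) {
    float h = 0.5f * angle, s = std::sin(h);
    return { std::cos(h), ax * s, ay * s, az * s };
}

// Compose two rotations: applying 'a * b' is the same as applying b, then a.
Quat Mul(const Quat &a, const Quat &b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Expand to a 3x3 rotation matrix only when the renderer needs one.
void ToMatrix(const Quat &q, float m[3][3]) {
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;
    m[0][0] = 1 - 2*(yy + zz); m[0][1] = 2*(xy - wz);     m[0][2] = 2*(xz + wy);
    m[1][0] = 2*(xy + wz);     m[1][1] = 1 - 2*(xx + zz); m[1][2] = 2*(yz - wx);
    m[2][0] = 2*(xz - wy);     m[2][1] = 2*(yz + wx);     m[2][2] = 1 - 2*(xx + yy);
}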

Yeah, Quaternions are on my list of things to add. I'll get there eventually. Probably when I actually want to interpolate between rotations, funnily enough :D

My solution looks like so:

The orientation is stored as a unit quaternion and the position as a vector (no scaling here); together they define the placement. A placement is given w.r.t. the world unless a controller is installed that treats it as a local placement. E.g. the parenting controller uses the local placement together with the parent's world placement to compute the world placement of the entity it is responsible for. The spatial services (a kind of sub-system) are responsible for ensuring that the placement variables are converted into a placement matrix when it comes to rendering.

User input is just another kind of control (similar, in this sense, to the parenting control mentioned above): its job is to compute a new orientation and/or position for a placement. How it does this is irrelevant to the placement itself, though of course not to the user experience. IMHO, if you need a yaw + pitch control then do it that way, but I would not burden the placement or the camera with those details. Instead, I have a placement for each entity, and the camera is just another entity, so it makes no difference here.
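
As a rough sketch of that placement idea (illustrative names only, not the actual code): a unit quaternion plus a position vector, with the parenting control composing a local placement against the parent's world placement.

// Illustrative sketch of a placement: unit quaternion + position, composed
// hierarchically and only turned into a matrix for rendering.
struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

struct Placement {
    Quat orientation; // unit quaternion
    Vec3 position;
};

Quat Mul(const Quat &a, const Quat &b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotate a vector by a unit quaternion: v' = q * (0, v) * conj(q).
Vec3 Rotate(const Quat &q, const Vec3 &v) {
    Quat p  = { 0.0f, v.x, v.y, v.z };
    Quat qc = { q.w, -q.x, -q.y, -q.z };
    Quat r  = Mul(Mul(q, p), qc);
    return { r.x, r.y, r.z };
}

// Parenting control, as described above: the entity's world placement is its
// local placement transformed by the parent's world placement.
Placement Compose(const Placement &parentWorld, const Placement &local) {
    Placement world;
    world.orientation = Mul(parentWorld.orientation, local.orientation);
    world.position    = Rotate(parentWorld.orientation, local.position);
    world.position.x += parentWorld.position.x;
    world.position.y += parentWorld.position.y;
    world.position.z += parentWorld.position.z;
    return world;
}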
