Normalising Ranges to -1 to 1

glm::mat4 proj = glm::ortho(0.0F,960.0F,0.0F,540.0F,-1.0F,1.0F);
glm::vec4 vp(100.0F, 100.0F, 0.0F, 1.0F);
glm::vec4 result = proj * vp;

How exactly are these calculations performed? What is the actual equation?

The actual call is in glm/gtc/matrix_transform.inl.

template <typename T>
GLM_FUNC_QUALIFIER tmat4x4<T, defaultp> ortho
(
    T left,
    T right,
    T bottom,
    T top,
    T zNear,
    T zFar
)
{
    tmat4x4<T, defaultp> Result(1);
    Result[0][0] = static_cast<T>(2) / (right - left);
    Result[1][1] = static_cast<T>(2) / (top - bottom);
    Result[2][2] = - static_cast<T>(2) / (zFar - zNear);
    Result[3][0] = - (right + left) / (right - left);
    Result[3][1] = - (top + bottom) / (top - bottom);
    Result[3][2] = - (zFar + zNear) / (zFar - zNear);
    return Result;
}

I've always liked http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/ for a refresher.


This stuff is so hard to get a grasp of as a beginner. I haven't yet managed to understand any of the tutorials I have been following, including that one.

What do the two array operators represent?

[i][j]

And why is there no [3][3]?


Don't feel bad. I've been playing this game for a while and still have trouble with the divide-by-w concept.

Give this [ guy ] a listen. Maybe don't do his software rendering tutorial series, but at least watch it. Well worth the time.


Thanks a lot.

1 hour ago, calioranged said:

And why is there no [3][3]?

Because it's not being changed from the value set in the constructor. The same goes for the other missing elements.

1 hour ago, calioranged said:

And why is there no [3][3]

The starting matrix is the identity matrix: a 4x4 matrix that is all zeros except for the [0][0] to [3][3] diagonal, which is all ones. So not all elements need to be computed; only the ones that differ from the identity are assigned.

The ortho (or perspective) matrix takes vertices from the scene (naively, one at a time) and smashes them onto a plane based on the camera settings. (This is only somewhat true: the w component is passed along to be used as a divisor when calculating the final location.) The equations, packed into the form of a matrix, are derived from the camera settings.

If your camera is centred and symmetrical, the equations are fairly easy for an ortho camera.

(r, l, t, b, f, n stand for right, left, top, bottom, far, near.)

glm::mat4 proj = glm::ortho(0.0F,960.0F,0.0F,540.0F,-1.0F,1.0F);
glm::vec4 vp(100.0F, 100.0F, 0.0F, 1.0F);
glm::vec4 result = proj * vp;

After multiplying the above matrix by the vector of x,y,z,w coordinates, I can now see how the normalisation works:

X: (2/(960-0))*100 + 0*100 + 0*0 + (-(960+0)/(960-0))*1 = −0.7916666667

Y: 0*100 + (2/(540-0))*100 + 0*0 + (-(540+0)/(540-0))*1 = −0.6296296296

Z: 0*100 + 0*100 + (-2/(1-(-1)))*0 + (-(1+(-1))/(1-(-1)))*1 = 0.0

W: (0*100)+(0*100)+(0*0) + (1*1) = 1.0

But how are the matrix elements decided in the first place? For example, why is 2/(right - left) in the top-left corner, and why is -(right + left)/(right - left) in the top-right corner? Is there a computation beforehand that determines where these matrix elements are placed?
