
Khatharr

Posted 10 November 2012 - 02:01 PM

The three transforms:

World - Transforms the vertices from local space to world space. This assigns a position, rotation, and scale to the model in the world. It can be thought of as placing an object in a room.
View - Transforms the vertices from world space to 'camera' space. This essentially re-expresses everything so that the camera sits at the origin. It can be thought of as positioning the camera to look at the object.
Projection - This is the weird one. The camera's field of view creates a sort of rectangular cone shape (the frustum). In order to get pixel colors we want that to be a rectangular box. In other words, we want the near plane of the frustum (the visible area) to end up with the same width and height as the far plane. The result is that things nearer to the camera get stretched a bit, so they look closer. Once this is done the graphics hardware can trace a straight line from the point where each pixel maps onto the near plane to the same x/y coordinate on the far plane, and if that line intersects a polygon we fetch a color value for the point where the intersection occurs.
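As a rough sketch of what building those three matrices can look like, here's a version using the old D3DX9 helpers (D3DX and all of the specific values here are just assumptions for illustration; any matrix library with equivalent functions works the same way):

#include <d3dx9.h>

// Build the three matrices for one model. The values are made up for the example.
void BuildTransforms(D3DXMATRIX& world, D3DXMATRIX& view, D3DXMATRIX& proj)
{
    // World: scale, then rotate, then translate the model into the room.
    D3DXMATRIX scale, rot, trans;
    D3DXMatrixScaling(&scale, 2.0f, 2.0f, 2.0f);
    D3DXMatrixRotationY(&rot, D3DX_PI / 4.0f);        // 45 degrees around Y
    D3DXMatrixTranslation(&trans, 10.0f, 0.0f, 5.0f);
    world = scale * rot * trans;  // D3DX uses row vectors, so the transform
                                  // applied first goes on the left

    // View: position the camera and aim it at the object.
    D3DXVECTOR3 eye(0.0f, 3.0f, -10.0f);
    D3DXVECTOR3 at(10.0f, 0.0f, 5.0f);
    D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
    D3DXMatrixLookAtLH(&view, &eye, &at, &up);

    // Projection: squash the frustum into a box (left-handed, 90-degree FOV).
    D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 2.0f,
                               640.0f / 480.0f,       // aspect ratio
                               1.0f, 100.0f);         // near/far planes
}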

Since these transforms can all be described as matrices, they can be multiplied together into a single matrix that correctly performs all three steps on each vertex in the model.
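Continuing the sketch above, concatenating the three and pushing one vertex through the result could look like this (D3DXVec3TransformCoord does the matrix multiply and the divide by w for you):

#include <d3dx9.h>

// Concatenate world, view, and projection into one matrix and apply it.
// Transforming a vertex by wvp gives the same result as applying the
// three transforms one after another.
D3DXVECTOR3 TransformVertex(const D3DXVECTOR3& localPos,
                            const D3DXMATRIX& world,
                            const D3DXMATRIX& view,
                            const D3DXMATRIX& proj)
{
    D3DXMATRIX wvp = world * view * proj;  // row-vector order: world first

    D3DXVECTOR3 out;
    D3DXVec3TransformCoord(&out, &localPos, &wvp);  // transform, then divide by w
    return out;
}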

Here's a Microsoft article that discusses it a bit:

http://msdn.microsoft.com/en-us/library/windows/desktop/bb206269%28v=vs.85%29.aspx

You could also use matrices or quaternions to implement complicated rotations in three-dimensional space, such as for a flight simulator or spacecraft simulator where you have to compound several angles/vectors together.
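A minimal sketch of that last idea, again assuming D3DX9 as a stand-in (the yaw/pitch inputs and axis choices are arbitrary for the example):

#include <d3dx9.h>

// Compound two incremental rotations (e.g. from flight-stick input) with
// quaternions, then turn the result into a rotation matrix.
D3DXMATRIX CompoundRotation(float yaw, float pitch)
{
    D3DXVECTOR3 yAxis(0.0f, 1.0f, 0.0f);
    D3DXVECTOR3 xAxis(1.0f, 0.0f, 0.0f);

    D3DXQUATERNION qYaw, qPitch, qTotal;
    D3DXQuaternionRotationAxis(&qYaw, &yAxis, yaw);
    D3DXQuaternionRotationAxis(&qPitch, &xAxis, pitch);

    // Like D3DXMatrixMultiply, this applies the first argument's rotation
    // first, then the second's.
    D3DXQuaternionMultiply(&qTotal, &qYaw, &qPitch);
    D3DXQuaternionNormalize(&qTotal, &qTotal);  // avoid drift after many steps

    D3DXMATRIX rot;
    D3DXMatrixRotationQuaternion(&rot, &qTotal);
    return rot;
}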
