# Excerpt from OpenGL® Programming Guide 6th Ed.


**Ed Note:** The links in this document will not work properly. Please use the featured version for web browsing: http://www.informit.com/ShowCover.asp?isbn=0321481003&type=c

Excerpt from *OpenGL® Programming Guide: The Official Guide to Learning OpenGL®, Version 2.1, 6th Edition*, by Dave Shreiner, Mason Woo, Jackie Neider, and Tom Davis

Published by Addison Wesley Professional

ISBN-10: 0-321-48100-3

ISBN-13: 978-0-321-48100-9

**Chapter Objectives**

After reading this chapter, you'll be able to do the following:

- View a *geometric model* in any orientation by transforming it in three-dimensional space
- Control the location in three-dimensional space from which the model is viewed
- Clip undesired portions of the model out of the scene that's to be viewed
- Manipulate the appropriate matrix stacks that control model transformation for viewing, and project the model onto the screen
- Combine multiple transformations to mimic sophisticated systems in motion, such as a solar system or an articulated robot arm
- Reverse or mimic the operations of the geometric processing pipeline

Look at the image on the cover of this book. The program that produced that image contained a single geometric description of a building block. Each block was carefully positioned in the scene: some blocks were scattered on the floor, some were stacked on top of each other on the table, and some were assembled to make the globe. Also, a particular viewpoint had to be chosen. Obviously, we wanted to look at the corner of the room containing the globe. But how far away from the scene—and where exactly—should the viewer be? We wanted to make sure that the final image of the scene contained a good view out the window, that a portion of the floor was visible, and that all the objects in the scene were not only visible but presented in an interesting arrangement. This chapter explains how to use OpenGL to accomplish these tasks: how to position and orient models in three-dimensional space and how to establish the location—also in three-dimensional space—of the viewpoint. All of these factors help determine exactly what image appears on the screen.

Remember that the point of computer graphics is to create a two-dimensional image of three-dimensional objects (it has to be two-dimensional because it's drawn on a flat screen), but you need to think in three-dimensional coordinates while making many of the decisions that determine what is drawn on the screen. A common mistake people make when creating three-dimensional graphics is to start thinking too soon that the final image appears on a flat, two-dimensional screen. Avoid thinking about which pixels need to be drawn, and instead try to visualize three-dimensional space. Create your models in some three-dimensional universe that lies deep inside your computer, and let the computer do its job of calculating which pixels to color.

A series of three computer operations converts an object's three-dimensional coordinates to pixel positions on the screen:

- Transformations, which are represented by matrix multiplication, include modeling, viewing, and projection operations. Such operations include rotation, translation, scaling, reflecting, orthographic projection, and perspective projection. Generally, you use a combination of several transformations to draw a scene.
- Since the scene is rendered on a rectangular window, objects (or parts of objects) that lie outside the window must be clipped. In three-dimensional computer graphics, clipping occurs by throwing out objects on one side of a clipping plane.
- Finally, a correspondence must be established between the transformed coordinates and screen pixels. This is known as a *viewport* transformation.

This chapter describes all of these operations, in the following major sections:

- "Overview: The Camera Analogy" gives an overview of the transformation process by describing the analogy of taking a photograph with a camera, presents a simple example program that transforms an object, and briefly describes the basic OpenGL transformation commands.
- "Viewing and Modeling Transformations" explains in detail how to specify and imagine the effect of viewing and modeling transformations. These transformations orient the model and the camera relative to each other to obtain the desired final image.
- "Projection Transformations" describes how to specify the shape and orientation of the *viewing volume*. The viewing volume determines how a scene is projected onto the screen (with a perspective or orthographic projection) and which objects or parts of objects are clipped out of the scene.
- "Viewport Transformation" explains how to control the conversion of three-dimensional model coordinates to screen coordinates.
- "Troubleshooting Transformations" presents some tips for discovering why you might not be getting the desired effect from your modeling, viewing, projection, and viewport transformations.
- "Manipulating the Matrix Stacks" discusses how to save and restore certain transformations. This is particularly useful when you're drawing complicated objects that are built from simpler ones.
- "Additional Clipping Planes" describes how to specify additional clipping planes beyond those defined by the viewing volume.
- "Examples of Composing Several Transformations" walks you through a couple of more complicated uses for transformations.
- "Reversing or Mimicking Transformations" shows you how to take a transformed point in window coordinates and reverse the transformation to obtain its original object coordinates. The transformation itself (without reversal) can also be emulated.

**Overview: The Camera Analogy**

The transformation process used to produce the desired scene for viewing is analogous to taking a photograph with a camera. As shown in Figure 3-1, the steps with a camera (or a computer) might be the following:

- Set up your tripod and point the camera at the scene (viewing transformation).
- Arrange the scene to be photographed into the desired composition (modeling transformation).
- Choose a camera lens or adjust the zoom (projection transformation).
- Determine how large you want the final photograph to be—for example, you might want it enlarged (viewport transformation).

**Figure 3-1**

The Camera Analogy

Note that these steps correspond to the order in which you specify the desired transformations in your program, not necessarily the order in which the relevant mathematical operations are performed on an object's vertices. The viewing transformations must precede the modeling transformations in your code, but you can specify the projection and viewport transformations at any point before drawing occurs. Figure 3-2 shows the order in which these operations occur on your computer.

**Figure 3-2**

Stages of Vertex Transformation

To specify viewing, modeling, and projection transformations, you construct a 4 × 4 matrix **M**, which is then multiplied by the coordinates of each vertex *v* in the scene to accomplish the transformation:

*v*′ = **M***v*

(Remember that vertices always have four coordinates (*x*, *y*, *z*, *w*), although in most cases *w* is 1, and for two-dimensional data, *z* is 0.) Note that viewing and modeling transformations are automatically applied to surface normal vectors, in addition to vertices. (Normal vectors are used only in *eye coordinates*.) This ensures that the normal vector's relationship to the vertex data is properly preserved.

The viewing and modeling transformations you specify are combined to form the modelview matrix, which is applied to the incoming *object coordinates* to yield *eye coordinates*. Next, if you've specified additional clipping planes to remove certain objects from the scene or to provide cutaway views of objects, these clipping planes are applied.

After that, OpenGL applies the projection matrix to yield *clip coordinates*. This transformation defines a viewing volume; objects outside this volume are clipped so that they're not drawn in the final scene. After this point, the *perspective division* is performed by dividing coordinate values by *w* to produce *normalized device coordinates*. (See Appendix F for more information about the meaning of the *w*-coordinate and how it affects matrix transformations.) Finally, the transformed coordinates are converted to *window coordinates* by applying the viewport transformation. You can manipulate the dimensions of the viewport to cause the final image to be enlarged, shrunk, or stretched.
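The stages just described can be imitated in plain C. The sketch below is illustrative only; the helper names `mat_vec` and `transform_vertex` are invented and are not part of OpenGL. It carries one object-space vertex through the modelview matrix, the projection matrix, perspective division, and a viewport mapping, assuming the default depth range of [0, 1]:

```c
/* Multiply a 4 x 4 column-major matrix by a four-component vertex: out = M v. */
static void mat_vec(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[row]     * v[0] + m[row + 4]  * v[1]
                 + m[row + 8] * v[2] + m[row + 12] * v[3];
}

/* Carry an object-space vertex through the whole pipeline:
 * modelview -> projection -> perspective division -> viewport. */
static void transform_vertex(const float modelview[16],
                             const float projection[16],
                             int vx, int vy, int vw, int vh,
                             const float obj[4], float win[3])
{
    float eye[4], clip[4], ndc[3];

    mat_vec(modelview, obj, eye);     /* object coordinates -> eye coordinates  */
    mat_vec(projection, eye, clip);   /* eye coordinates    -> clip coordinates */

    /* Perspective division yields normalized device coordinates. */
    ndc[0] = clip[0] / clip[3];
    ndc[1] = clip[1] / clip[3];
    ndc[2] = clip[2] / clip[3];

    /* The viewport transformation maps NDC onto the window. */
    win[0] = (ndc[0] + 1.0f) * 0.5f * (float)vw + (float)vx;
    win[1] = (ndc[1] + 1.0f) * 0.5f * (float)vh + (float)vy;
    win[2] = (ndc[2] + 1.0f) * 0.5f;  /* default depth range of [0, 1] */
}
```

With identity matrices and a 500 × 500 viewport at (0, 0), the object-space origin lands at window coordinates (250, 250), the center of the window.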

You might correctly suppose that the *x*- and *y*-coordinates are sufficient to determine which pixels need to be drawn on the screen. However, all the transformations are performed on the *z*-coordinates as well. This way, at the end of this transformation process, the *z*-values correctly reflect the depth of a given vertex (measured in distance away from the screen). One use for this depth value is to eliminate unnecessary drawing. For example, suppose two vertices have the same *x*- and *y*-values but different *z*-values. OpenGL can use this information to determine which surfaces are obscured by other surfaces and can then avoid drawing the hidden surfaces. (See Chapter 5 and Chapter 10 for more information about this technique, which is called *hidden-surface removal*.)

As you've probably guessed by now, you need to know a few things about matrix mathematics to get the most out of this chapter. If you want to brush up on your knowledge in this area, you might consult a textbook on linear algebra.

**A Simple Example: Drawing a Cube**

Example 3-1 draws a cube that's scaled by a modeling transformation (see Figure 3-3). The viewing transformation, **gluLookAt()**, positions and aims the camera toward where the cube is drawn. A projection transformation and a viewport transformation are also specified. The rest of this section walks you through Example 3-1 and briefly explains the transformation commands it uses. The succeeding sections contain a complete, detailed discussion of all OpenGL transformation commands.

**Figure 3-3**

Transformed Cube

**Example 3-1** Transformed Cube: cube.c

```c
void init(void)
{
   glClearColor(0.0, 0.0, 0.0, 0.0);
   glShadeModel(GL_FLAT);
}

void display(void)
{
   glClear(GL_COLOR_BUFFER_BIT);
   glColor3f(1.0, 1.0, 1.0);
   glLoadIdentity();             /* clear the matrix */
   /* viewing transformation */
   gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
   glScalef(1.0, 2.0, 1.0);      /* modeling transformation */
   glutWireCube(1.0);
   glFlush();
}

void reshape(int w, int h)
{
   glViewport(0, 0, (GLsizei) w, (GLsizei) h);
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
   glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char** argv)
{
   glutInit(&argc, argv);
   glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
   glutInitWindowSize(500, 500);
   glutInitWindowPosition(100, 100);
   glutCreateWindow(argv[0]);
   init();
   glutDisplayFunc(display);
   glutReshapeFunc(reshape);
   glutMainLoop();
   return 0;
}
```

**The Viewing Transformation**

Recall that the viewing transformation is analogous to positioning and aiming a camera. In this code example, before the viewing transformation can be specified, the *current matrix* is set to the identity matrix with **glLoadIdentity()**. This step is necessary since most of the transformation commands multiply the current matrix by the specified matrix and then set the result to be the current matrix. If you don't clear the current matrix by loading it with the identity matrix, you continue to combine previous transformation matrices with the new one you supply. In some cases, you do want to perform such combinations, but you also need to clear the matrix sometimes.

In Example 3-1, after the matrix is initialized, the viewing transformation is specified with **gluLookAt()**. The arguments for this command indicate where the camera (or eye position) is placed, where it is aimed, and which way is up. The arguments used here place the camera at (0, 0, 5), aim the camera lens toward (0, 0, 0), and specify the *up-vector* as (0, 1, 0). The up-vector defines a unique orientation for the camera.

If **gluLookAt()** was not called, the camera has a default position and orientation. By default, the camera is situated at the origin, points down the negative *z*-axis, and has an up-vector of (0, 1, 0). Therefore, in Example 3-1, the overall effect is that **gluLookAt()** moves the camera five units along the *z*-axis. (See "Viewing and Modeling Transformations" for more information about viewing transformations.)

**The Modeling Transformation**

You use the modeling transformation to position and orient the model. For example, you can rotate, translate, or scale the model—or perform some combination of these operations. In Example 3-1, **glScalef()** is the modeling transformation that is used. The arguments for this command specify how scaling should occur along the three axes. If all the arguments are 1.0, this command has no effect. In Example 3-1, the cube is drawn twice as large in the *y*-direction. Thus, if one corner of the cube had originally been at (3.0, 3.0, 3.0), that corner would wind up being drawn at (3.0, 6.0, 3.0). The effect of this modeling transformation is to transform the cube so that it isn't a cube but a rectangular box.

**Try This**

Change the **gluLookAt()** call in Example 3-1 to the modeling transformation **glTranslatef()** with parameters (0.0, 0.0, -5.0). The result should look exactly the same as when you used **gluLookAt()**. Why are the effects of these two commands similar?

Note that instead of moving the camera (with a viewing transformation) so that the cube could be viewed, you could have moved the cube away from the camera (with a modeling transformation). This duality in the nature of viewing and modeling transformations is why you need to think about the effects of both types of transformations simultaneously. It doesn't make sense to try to separate the effects, but sometimes it's easier to think about them in one way more than in the other. This is also why modeling and viewing transformations are combined into the *modelview matrix* before the transformations are applied. (See "Viewing and Modeling Transformations" for more information about how to think about modeling and viewing transformations and how to specify them to get the results you want.)

Also note that the modeling and viewing transformations are included in the **display()** routine, along with the call that's used to draw the cube, **glutWireCube()**. In this way, **display()** can be used repeatedly to draw the contents of the window if, for example, the window is moved or uncovered, and you've ensured that the cube is drawn in the desired way each time, with the appropriate transformations. The potential repeated use of **display()** underscores the need to load the identity matrix before performing the viewing and modeling transformations, especially when other transformations might be performed between calls to **display()**.

**The Projection Transformation**

Specifying the projection transformation is like choosing a lens for a camera. You can think of this transformation as determining what the field of view or viewing volume is and therefore what objects are inside it and to some extent how they look. This is equivalent to choosing among wide-angle, normal, and telephoto lenses, for example. With a wide-angle lens, you can include a wider scene in the final photograph than you can with a telephoto lens, but a telephoto lens allows you to photograph objects as though they're closer to you than they actually are. In computer graphics, you don't have to pay $10,000 for a 2,000-millimeter telephoto lens; once you've bought your graphics workstation, all you need to do is use a smaller number for your field of view.

In addition to the field-of-view considerations, the projection transformation determines how objects are *projected* onto the screen, as the term suggests. Two basic types of projections are provided for you by OpenGL, along with several corresponding commands for describing the relevant parameters in different ways. One type is the *perspective* projection, which matches how you see things in daily life. Perspective makes objects that are farther away appear smaller; for example, it makes railroad tracks appear to converge in the distance. If you're trying to make realistic pictures, you'll want to choose perspective projection, which is specified with the **glFrustum()** command in Example 3-1.

The other type of projection is *orthographic*, which maps objects directly onto the screen without affecting their relative sizes. Orthographic projection is used in architectural and computer-aided design applications where the final image needs to reflect the measurements of objects, rather than how they might look. Architects create perspective drawings to show how particular buildings or interior spaces look when viewed from various vantage points; the need for orthographic projection arises when blueprint plans or elevations, which are used in the construction of buildings, are generated. (See "Projection Transformations" for a discussion of ways to specify both kinds of projection transformations.)

Before **glFrustum()** can be called to set the projection transformation, some preparation is needed. As shown in the **reshape()** routine in Example 3-1, the command called **glMatrixMode()** is used first, with the argument GL_PROJECTION. This indicates that the current matrix specifies the projection transformation and that subsequent transformation calls affect the *projection matrix*. As you can see, a few lines later, **glMatrixMode()** is called again, this time with GL_MODELVIEW as the argument. This indicates that succeeding transformations now affect the modelview matrix instead of the projection matrix. (See "Manipulating the Matrix Stacks" for more information about how to control the projection and modelview matrices.)

Note that **glLoadIdentity()** is used to initialize the current projection matrix so that only the specified projection transformation has an effect. Now **glFrustum()** can be called, with arguments that define the parameters of the projection transformation. In this example, both the projection transformation and the viewport transformation are contained in the **reshape()** routine, which is called when the window is first created and whenever the window is moved or reshaped. This makes sense, because both projecting (the width-to-height aspect ratio of the projection viewing volume) and applying the viewport relate directly to the screen, and specifically to the size or aspect ratio of the window on the screen.

**Try This**

Change the **glFrustum()** call in Example 3-1 to the more commonly used Utility Library routine **gluPerspective()**, with parameters (60.0, 1.0, 1.5, 20.0). Then experiment with different values, especially for *fovy* and *aspect*.

**The Viewport Transformation**

Together, the projection transformation and the viewport transformation determine how a scene is mapped onto the computer screen. The projection transformation specifies the mechanics of how the mapping should occur, and the viewport indicates the shape of the available screen area into which the scene is mapped. Since the viewport specifies the region the image occupies on the computer screen, you can think of the viewport transformation as defining the size and location of the final processed photograph—for example, whether the photograph should be enlarged or shrunk.

The arguments for **glViewport()** describe the origin of the available screen space within the window—(0, 0) in this example—and the width and height of the available screen area, all measured in pixels on the screen. This is why this command needs to be called within **reshape()**: if the window changes size, the viewport needs to change accordingly. Note that the width and height are specified using the actual width and height of the window; often, you want to specify the viewport in this way, rather than give an absolute size. (See "Viewport Transformation" for more information about how to define the viewport.)

**Drawing the Scene**

Once all the necessary transformations have been specified, you can draw the scene (that is, take the photograph). As the scene is drawn, OpenGL transforms each vertex of every object in the scene by the modeling and viewing transformations. Each vertex is then transformed as specified by the projection transformation and clipped if it lies outside the viewing volume described by the projection transformation. Finally, the remaining transformed vertices are divided by *w* and mapped onto the viewport.

**General-Purpose Transformation Commands**

This section discusses some OpenGL commands that you might find useful as you specify desired transformations. You've already seen two of these commands: **glMatrixMode()** and **glLoadIdentity()**. Four commands described here—**glLoadMatrix*()**, **glLoadTransposeMatrix*()**, **glMultMatrix*()**, and **glMultTransposeMatrix*()**—allow you to specify any transformation matrix directly or to multiply the current matrix by that specified matrix. More specific transformation commands—such as **gluLookAt()** and **glScale*()**—are described in later sections.

As described in the preceding section, you need to state whether you want to modify the modelview or projection matrix before supplying a transformation command. You choose the matrix with **glMatrixMode()**. When you use nested sets of OpenGL commands that might be called repeatedly, remember to reset the matrix mode correctly. (The **glMatrixMode()** command can also be used to indicate the *texture matrix*; texturing is discussed in detail in "The Texture Matrix Stack" in Chapter 9.)

void **glMatrixMode**(GLenum *mode*);

Specifies whether the modelview, projection, or texture matrix will be modified, using the argument GL_MODELVIEW, GL_PROJECTION, or GL_TEXTURE for *mode*. Subsequent transformation commands affect the specified matrix. Note that only one matrix can be modified at a time. By default, the modelview matrix is the one that's modifiable, and all three matrices contain the identity matrix.

You use the **glLoadIdentity()** command to clear the currently modifiable matrix for future transformation commands, as these commands modify the current matrix. Typically, you always call this command before specifying projection or viewing transformations, but you might also call it before specifying a modeling transformation.

void **glLoadIdentity**(void);

Sets the currently modifiable matrix to the 4 × 4 identity matrix.

If you want to specify explicitly a particular matrix to be loaded as the current matrix, use **glLoadMatrix*()** or **glLoadTransposeMatrix*()**. Similarly, use **glMultMatrix*()** or **glMultTransposeMatrix*()** to multiply the current matrix by the matrix passed in as an argument.

void **glLoadMatrix**{fd}(const *TYPE* \**m*);

Sets the 16 values of the current matrix to those specified by *m*.

void **glMultMatrix**{fd}(const *TYPE* \**m*);

Multiplies the matrix specified by the 16 values pointed to by *m* by the current matrix and stores the result as the current matrix.

All matrix multiplication with OpenGL occurs as follows. Suppose the current matrix is **C** and the matrix specified with **glMultMatrix*()** or any of the transformation commands is **M**. After multiplication, the final matrix is always **CM**. Since matrix multiplication isn't generally commutative, the order makes a difference.

The argument for **glLoadMatrix*()** and **glMultMatrix*()** is a vector of 16 values (*m*_{1}, *m*_{2}, ... , *m*_{16}) that specifies a matrix **M** stored in column-major order as follows:

```
        m1   m5   m9   m13
M  =    m2   m6   m10  m14
        m3   m7   m11  m15
        m4   m8   m12  m16
```

If you're programming in C and you declare a matrix as *m*[4][4], then the element *m*[*i*][*j*] is in the *i*th column and *j*th row of the OpenGL transformation matrix. This is the reverse of the standard C convention, in which *m*[*i*][*j*] is in the *i*th row and *j*th column. One way to avoid confusion is to declare your matrices as *m*[16] instead.
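A small sketch may make the indexing concrete (the helper name `apply` is invented for illustration): with a matrix declared m[4][4] and filled the way OpenGL expects, multiplying a vertex means reading m[col][row], the reverse of the usual C habit.

```c
/* Apply a column-major m[4][4] matrix to a vertex: out = M v.
 * m[col][row] holds the element in column col and row row, so the
 * translation components of a transformation live in m[3][0..2]. */
static void apply(float m[4][4], const float v[4], float out[4])
{
    int row, col;
    for (row = 0; row < 4; ++row) {
        out[row] = 0.0f;
        for (col = 0; col < 4; ++col)
            out[row] += m[col][row] * v[col];   /* note the swapped indices */
    }
}
```

An array filled this way, with a translation stored in m[3][0..2], could be handed to glLoadMatrixf(&m[0][0]) and would move vertices just as glTranslatef() does.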

Another way to avoid possible confusion is to call the OpenGL routines that accept matrices in row-major (transposed) order:

void **glLoadTransposeMatrix**{fd}(const *TYPE* \**m*);

Sets the 16 values of the current matrix to those specified by *m*, whose values are stored in row-major order.

void **glMultTransposeMatrix**{fd}(const *TYPE* \**m*);

Multiplies the matrix specified by the 16 values pointed to by *m*, stored in row-major order, by the current matrix, and stores the result as the current matrix.

You might be able to maximize efficiency by using display lists to store frequently used matrices (and their inverses), rather than recomputing them. (See "Display List Design Philosophy" in Chapter 7.) OpenGL implementations often must compute the inverse of the modelview matrix so that normals and clipping planes can be correctly transformed to eye coordinates.

**Viewing and Modeling Transformations**

Viewing and modeling transformations are inextricably related in OpenGL and are in fact combined into a single modelview matrix. (See "A Simple Example: Drawing a Cube.") One of the toughest problems newcomers to computer graphics face is understanding the effects of combined three-dimensional transformations. As you've already seen, there are alternative ways to think about transformations—do you want to move the camera in one direction or move the object in the opposite direction? Each way of thinking about transformations has advantages and disadvantages, but in some cases one way more naturally matches the effect of the intended transformation. If you can find a natural approach for your particular application, it's easier to visualize the necessary transformations and then write the corresponding code to specify the matrix manipulations. The first part of this section discusses how to think about transformations; later, specific commands are presented. For now, we use only the matrix-manipulation commands you've already seen. Finally, keep in mind that you must call **glMatrixMode()** with GL_MODELVIEW as its argument prior to performing modeling or viewing transformations.

Let's start with a simple case of two transformations: a 45-degree counterclockwise rotation about the origin around the *z*-axis, and a translation down the *x*-axis. Suppose that the object you're drawing is small compared with the translation (so that you can see the effect of the translation) and that it's originally located at the origin. If you rotate the object first and then translate it, the rotated object appears on the *x*-axis. If you translate it down the *x*-axis first, however, and then rotate about the origin, the object ends up off the axis, as shown in Figure 3-4. In general, the order of transformations is critical: if you do transformation A and then transformation B, you almost always get something different than if you do them in the opposite order.

**Rotating First or Translating First**

Now let's talk about the order in which you specify a series of transformations. All viewing and modeling transformations are represented as 4 × 4 matrices. Each successive **glMultMatrix*()** or transformation command multiplies a new 4 × 4 matrix **M** by the current modelview matrix **C** to yield **CM**. Finally, vertices *v* are multiplied by the current modelview matrix. This process means that the last transformation command called in your program is actually the first one applied to the vertices: **CM***v*. Thus, one way of looking at it is to say that you have to specify the matrices in the reverse order.

Consider the following code sequence, which draws a single point using three transformations:

```c
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(N);   /* apply transformation N */
glMultMatrixf(M);   /* apply transformation M */
glMultMatrixf(L);   /* apply transformation L */
glBegin(GL_POINTS);
glVertex3f(v);      /* draw transformed vertex v */
glEnd();
```

With this code, the modelview matrix successively contains **I**, **N**, **NM**, and finally **NML**, where **I** represents the identity matrix. The transformed vertex is **NML***v*. Thus, the vertex transformation is **N**(**M**(**L***v*)): *v* is multiplied first by **L**, the resulting **L***v* is multiplied by **M**, and the resulting **ML***v* is multiplied by **N**. The transformations effectively occur in the opposite order from how they were specified.
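This successive-multiplication rule can be checked numerically in plain C. The sketch below (the helper names `mult_current` and `transform` are invented for illustration) right-multiplies the current matrix the way **glMultMatrixf()** specifies and shows that the last matrix specified is the first one applied to the vertex:

```c
/* C <- C M on column-major matrices, as glMultMatrixf() specifies.
 * Element (row r, column c) of a column-major matrix is m[c*4 + r]. */
static void mult_current(float c[16], const float m[16])
{
    float r[16];
    int row, col, k, i;
    for (col = 0; col < 4; ++col)
        for (row = 0; row < 4; ++row) {
            r[col*4 + row] = 0.0f;
            for (k = 0; k < 4; ++k)
                r[col*4 + row] += c[k*4 + row] * m[col*4 + k];
        }
    for (i = 0; i < 16; ++i) c[i] = r[i];
}

/* v <- M v */
static void transform(const float m[16], float v[4])
{
    float r[4];
    int row, i;
    for (row = 0; row < 4; ++row)
        r[row] = m[row]*v[0] + m[4 + row]*v[1]
               + m[8 + row]*v[2] + m[12 + row]*v[3];
    for (i = 0; i < 4; ++i) v[i] = r[i];
}
```

Specifying a translation by (1, 0, 0) and then a scale of 2 along *x* carries the vertex (1, 0, 0) to *x* = 3 (the scale is applied to the vertex first, then the translation); issuing the same two matrices in the opposite order yields *x* = 4.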

Thus, if you like to think in terms of a grand, fixed coordinate system—in which matrix multiplications affect the position, orientation, and scaling of your model—you have to think of the multiplications as occurring in the opposite order from how they appear in the code. Using the simple example shown on the left side of Figure 3-4 (a rotation about the origin and a translation along the *x*-axis), if you want the object to appear on the axis after the operations, the rotation must occur first, followed by the translation. To do this, you issue the translation command first and the rotation command second, where **T** is the translation matrix and **R** is the rotation matrix:

```c
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(T);   /* translation */
glMultMatrixf(R);   /* rotation */
draw_the_object();
```

Another way to view matrix multiplications is to forget about a grand, fixed coordinate system in which your model is transformed and instead imagine that a local coordinate system is tied to the object you're drawing. All operations occur relative to this changing coordinate system. With this approach, the matrix multiplications now appear in the natural order in the code. (Regardless of which analogy you're using, the code is the same, but how you think about it differs.) To see this in the translation-rotation example, begin by visualizing the object with a coordinate system tied to it. The translation operation moves the object and its coordinate system down the *x*-axis. Then, the rotation occurs about the (now-translated) origin, so the object rotates in place in its position on the axis.

This approach is what you should use for applications such as articulated robot arms, where there are joints at the shoulder, elbow, and wrist, and on each of the fingers. To figure out where the tips of the fingers go relative to the body, you'd like to start at the shoulder, go down to the wrist, and so on, applying the appropriate rotations and translations at each joint. Thinking about it in reverse would be far more confusing.

This second approach can be problematic, however, in cases where scaling occurs, and especially so when the scaling is nonuniform (scaling different amounts along the different axes). After uniform scaling, translations move a vertex by a multiple of what they did before, as the coordinate system is stretched. Non-uniform scaling mixed with rotations may make the axes of the local coordinate system nonperpendicular.

As mentioned earlier, you normally issue viewing transformation commands in your program before any modeling transformations. In this way, a vertex in a model is first transformed into the desired orientation and then transformed by the viewing operation. Since the matrix multiplications must be specified in reverse order, the viewing commands need to come first. Note, however, that you don't need to specify either viewing or modeling transformations if you're satisfied with the default conditions. If there's no viewing transformation, the "camera" is left in the default position at the origin, pointing toward the negative *z*-axis; if there's no modeling transformation, the model isn't moved, and it retains its specified position, orientation, and size.

Since the commands for performing modeling transformations can be used to perform viewing transformations, modeling transformations are discussed first.

**Modeling Transformations**

The three OpenGL routines for modeling transformations are **glTranslate*()**, **glRotate*()**, and **glScale*()**. As you might suspect, these routines transform an object (or coordinate system, if you're thinking of it that way) by moving, rotating, stretching, shrinking, or reflecting it.

In the command summaries that follow, each matrix multiplication is described in terms of what it does to the vertices of a geometric object using the fixed coordinate system approach, and in terms of what it does to the local coordinate system that's attached to an object.

void **glTranslate**{fd}(*TYPE x*, *TYPE y*, *TYPE z*);

Multiplies the current matrix by a matrix that moves (translates) an object by the given *x*, *y*, and *z* values (or moves the local coordinate system by the same amounts).

Figure 3-5 shows the effect of **glTranslate*()**.

**Figure 3-5**

Translating an Object

Note that using (0.0, 0.0, 0.0) as the argument for **glTranslate*()** is the identity operation; that is, it has no effect on an object or its local coordinate system.

void **glRotate**{fd}(*TYPE angle*, *TYPE x*, *TYPE y*, *TYPE z*);

Multiplies the current matrix by a matrix that rotates an object (or the local coordinate system) in a counterclockwise direction about the ray from the origin through the point (*x*, *y*, *z*). The *angle* parameter specifies the angle of rotation in degrees.

The effect of **glRotatef(45.0, 0.0, 0.0, 1.0)**, a rotation of 45 degrees about the *z*-axis, is shown in Figure 3-6.

**Figure 3-6**

Rotating an Object

Note that an object that lies farther from the axis of rotation is more dramatically rotated (has a larger orbit) than an object drawn near the axis. Also, if the *angle* argument is zero, the **glRotate*()** command has no effect.

void **glScale**{fd}(*TYPE x*, *TYPE y*, *TYPE z*);

Multiplies the current matrix by a matrix that stretches, shrinks, or reflects an object along the axes. Each *x*-, *y*-, and *z*-coordinate of every point in the object is multiplied by the corresponding argument *x*, *y*, or *z*. With the local coordinate system approach, the local coordinate axes are stretched, shrunk, or reflected by the *x*, *y*, and *z* factors, and the associated object is transformed with them.

Figure 3-7 shows the effect of **glScalef(2.0, -0.5, 1.0)**.

**Figure 3-7**

Scaling and Reflecting an Object

Example 3-2 is a portion of a program that renders a triangle four times, as shown in Figure 3-8. These are the four transformed triangles:

- A solid wireframe triangle is drawn with no modeling transformation.
- The same triangle is drawn again, but with a dashed line stipple and translated along the negative *x*-axis.
- A triangle is drawn with a long dashed line stipple, with its height (*y*-axis) halved and its width (*x*-axis) increased by 50 percent.
- A rotated triangle, made of dotted lines, is drawn.

**Figure 3-8**

Modeling Transformation Example

**Example 3-2** Using Modeling Transformations: model.c

```c
glLoadIdentity();
glColor3f(1.0, 1.0, 1.0);
draw_triangle();             /* solid lines */

glEnable(GL_LINE_STIPPLE);   /* dashed lines */
glLineStipple(1, 0xF0F0);
glLoadIdentity();
glTranslatef(-20.0, 0.0, 0.0);
draw_triangle();

glLineStipple(1, 0xF00F);    /* long dashed lines */
glLoadIdentity();
glScalef(1.5, 0.5, 1.0);
draw_triangle();

glLineStipple(1, 0x8888);    /* dotted lines */
glLoadIdentity();
glRotatef(90.0, 0.0, 0.0, 1.0);
draw_triangle();
glDisable(GL_LINE_STIPPLE);
```

Note the use of **glLoadIdentity()** to isolate the effects of modeling transformations; initializing the matrix values prevents successive transformations from having a cumulative effect.

If you have downloaded Nate Robins' suite of tutorial programs, this is an opportune time to run the **transformation** tutorial, with which you can experiment interactively with the modeling transformations.

**Viewing Transformations**

A viewing transformation changes the position and orientation of the viewpoint. If you recall the camera analogy, the viewing transformation positions the camera tripod, pointing the camera toward the model. Just as you move the camera to some position and rotate it until it points in the desired direction, viewing transformations are generally composed of translations and rotations. Also remember that to achieve a certain scene composition in the final image or photograph, you can either move the camera or move all the objects in the opposite direction. Thus, a modeling transformation that rotates an object counterclockwise is equivalent to a viewing transformation that rotates the camera clockwise, for example. Finally, keep in mind that the viewing transformation commands must be called before any modeling transformations are performed, so that the modeling transformations take effect on the objects first.

You can manufacture a viewing transformation in any of several ways, as described next. You can also choose to use the default location and orientation of the viewpoint, which is at the origin, looking down the negative

When you use modeling transformation commands to emulate viewing transformations, you're trying to move the viewpoint in a desired way while keeping the objects in the world stationary. Since the viewpoint is initially located at the origin and since objects are often most easily constructed there as well (see Figure 3-9), you generally have to perform some transformation so that the objects can be viewed. Note that, as shown in the figure, the camera initially points down the negative

Object and Viewpoint at the Origin

In the simplest case, you can move the viewpoint backward, away from the objects; this has the same effect as moving the objects forward, or away from the viewpoint. Remember that, by default, forward is down the negative

glTranslatef(0.0, 0.0, -5.0);

Separating the Viewpoint and the Object

This routine moves the objects in the scene –5 units along the

Now suppose you want to view the objects from the side. Should you issue a rotate command before or after the translate command? If you're thinking in terms of a grand, fixed coordinate system, first imagine both the object and the camera at the origin. You could rotate the object first and then move it away from the camera so that the desired side is visible. You know that with the fixed coordinate system approach, commands have to be issued in the opposite order in which they should take effect, so you know that you need to write the translate command in your code first and follow it with the rotate command.

Now let's use the local coordinate system approach. In this case, think about moving the object and its local coordinate system away from the origin; then, the rotate command is carried out using the now-translated coordinate system. With this approach, commands are issued in the order in which they're applied, so once again the translate command comes first. Thus, the sequence of transformation commands to produce the desired result is

glTranslatef(0.0, 0.0, -5.0); glRotatef(90.0, 0.0, 1.0, 0.0); If you're having trouble keeping track of the effect of successive matrix multiplications, try using both the fixed and local coordinate system approaches and see whether one makes more sense to you. Note that with the fixed coordinate system, rotations always occur about the grand origin, whereas with the local coordinate system, rotations occur about the origin of the local system. You might also try using the

Often, programmers construct a scene around the origin or some other convenient location and then want to look at it from an arbitrary point to get a good view of it. As its name suggests, the

The

void

Defines a viewing matrix and multiplies it to the right of the current matrix. The desired viewpoint is specified by

In the default position, the camera is at the origin, is looking down the negative

gluLookAt(0.0, 0.0, 0.0, 0.0, 0.0, -100.0, 0.0, 1.0, 0.0); The

Default Camera Position

Figure 3-12 shows the effect of a typical

Using gluLookAt()

Therefore, to achieve this effect, call

gluLookAt(4.0, 2.0, 1.0, 2.0, 4.0, -3.0, 2.0, 2.0, -1.0); Note that

If you have Nate Robins' suite of tutorial programs, run the

To transform any arbitrary vector so that it's coincident with another arbitrary vector (for instance, the negative

Note that computing the angle between two normalized vectors by taking the inverse cosine of their dot product is not very accurate, especially for small angles, but it should work well enough to get you started.

<a name="Heading24" id="Heading24">

For some specialized applications, you might want to define your own transformation routine. Since this is rarely done and is a fairly advanced topic, it's left mostly as an exercise for the reader. The following exercises suggest two custom viewing transformations that might be useful.

*m*[*i*][*j*] is in row *i* and column *j*. One way to avoid confusion between the column and the row is to declare your matrices as *m*[16]. Another way to avoid possible confusion is to call the OpenGL routines **glLoadTransposeMatrix*()** and **glMultTransposeMatrix*()**, which use row-major (the standard C convention) matrices as arguments.

void **glLoadTransposeMatrix**{fd}(const *TYPE* \**m*);

Sets the 16 values of the current matrix to those specified by *m*, whose values are stored in row-major order. **glLoadTransposeMatrix*(***m***)** has the same effect as **glLoadMatrix*(***m*^T**)**.

void **glMultTransposeMatrix**{fd}(const *TYPE* \**m*);

Multiplies the matrix specified by the 16 values pointed to by *m* by the current matrix and stores the result as the current matrix. **glMultTransposeMatrix*(***m***)** has the same effect as **glMultMatrix*(***m*^T**)**.

You might be able to maximize efficiency by using display lists to store frequently used matrices (and their inverses), rather than recomputing them. (See "Display List Design Philosophy" in Chapter 7.) OpenGL implementations often must compute the inverse of the modelview matrix so that normals and clipping planes can be correctly transformed to eye coordinates.
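Since the only difference between the transpose and non-transpose calls is storage order, the conversion is easy to sketch in plain C. The helper below is hypothetical (it is not part of OpenGL or GLU); it shows the transposition that, conceptually, **glLoadTransposeMatrix*()** performs before handing a row-major C array to the column-major pipeline.

```c
#include <assert.h>

/* OpenGL's glLoadMatrix*() expects column-major storage: the element in
 * row i, column j lives at m[j*4 + i].  A row-major C array (the usual
 * m[4][4] reading) can be converted by transposing -- which is
 * conceptually all the *TransposeMatrix* variants do.  transpose4 is a
 * hypothetical helper for illustration. */
static void transpose4(const float in[16], float out[16])
{
    for (int i = 0; i < 4; ++i)          /* i: row in the row-major input */
        for (int j = 0; j < 4; ++j)      /* j: column in the input */
            out[j * 4 + i] = in[i * 4 + j];
}
```

Transposing twice returns the original layout, which is why mixing the transpose and non-transpose entry points on the same array is a common source of bugs.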

**Viewing and Modeling Transformations**

Viewing and modeling transformations are inextricably related in OpenGL and are in fact combined into a single modelview matrix. (See "A Simple Example: Drawing a Cube.") One of the toughest problems newcomers to computer graphics face is understanding the effects of combined three-dimensional transformations. As you've already seen, there are alternative ways to think about transformations—do you want to move the camera in one direction or move the object in the opposite direction? Each way of thinking about transformations has advantages and disadvantages, but in some cases one way more naturally matches the effect of the intended transformation. If you can find a natural approach for your particular application, it's easier to visualize the necessary transformations and then write the corresponding code to specify the matrix manipulations. The first part of this section discusses how to think about transformations; later, specific commands are presented. For now, we use only the matrix-manipulation commands you've already seen. Finally, keep in mind that you must call **glMatrixMode()** with GL_MODELVIEW as its argument prior to performing modeling or viewing transformations.

**Thinking about Transformations**

Let's start with a simple case of two transformations: a 45-degree counterclockwise rotation about the origin around the *z*-axis and a translation down the *x*-axis. Suppose that the object you're drawing is small compared with the translation (so that you can see the effect of the translation) and that it's originally located at the origin. If you rotate the object first and then translate it, the rotated object appears on the *x*-axis. If you translate it down the *x*-axis first, however, and then rotate about the origin, the object is on the line *y = x*, as shown in Figure 3-4. In general, the order of transformations is critical. If you do transformation A and then transformation B, you almost always get something different than if you do them in the opposite order.

**Figure 3-4** Rotating First or Translating First

Now let's talk about the order in which you specify a series of transformations. All viewing and modeling transformations are represented as 4 × 4 matrices. Each successive **glMultMatrix*()** or transformation command multiplies a new 4 × 4 matrix **M** by the current modelview matrix **C** to yield **CM**. Finally, vertices *v* are multiplied by the current modelview matrix. This process means that the last transformation command called in your program is actually the first one applied to the vertices: **CMv**. Thus, one way of looking at it is to say that you have to specify the matrices in the reverse order. Like many other things, however, once you've gotten used to thinking about this correctly, backward will seem like forward.

Consider the following code sequence, which draws a single point using three transformations:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glMultMatrixf(N);    /* apply transformation N */
    glMultMatrixf(M);    /* apply transformation M */
    glMultMatrixf(L);    /* apply transformation L */
    glBegin(GL_POINTS);
    glVertex3fv(v);      /* draw transformed vertex v */
    glEnd();

With this code, the modelview matrix successively contains **I**, **N**, **NM**, and finally **NML**, where **I** represents the identity matrix. The transformed vertex is **NMLv**. Thus, the vertex transformation is **N(M(Lv))**—that is, *v* is multiplied first by **L**, the resulting **Lv** is multiplied by **M**, and the resulting **MLv** is multiplied by **N**. Notice that the transformations to vertex *v* effectively occur in the opposite order than they were specified. (Actually, only a single multiplication of a vertex by the modelview matrix occurs; in this example, the **N**, **M**, and **L** matrices are already multiplied into a single matrix before it's applied to *v*.)

**Grand, Fixed Coordinate System**

Thus, if you like to think in terms of a grand, fixed coordinate system—in which matrix multiplications affect the position, orientation, and scaling of your model—you have to think of the multiplications as occurring in the opposite order from how they appear in the code. Using the simple example shown on the left side of Figure 3-4 (a rotation about the origin and a translation along the *x*-axis), if you want the object to appear on the axis after the operations, the rotation must occur first, followed by the translation. To do this, you'll need to reverse the order of operations, so the code looks something like this (where **R** is the rotation matrix and **T** is the translation matrix):

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glMultMatrixf(T);    /* translation */
    glMultMatrixf(R);    /* rotation */
    draw_the_object();

**Moving a Local Coordinate System**

Another way to view matrix multiplications is to forget about a grand, fixed coordinate system in which your model is transformed and instead imagine that a local coordinate system is tied to the object you're drawing. All operations occur relative to this changing coordinate system. With this approach, the matrix multiplications now appear in the natural order in the code. (Regardless of which analogy you're using, the code is the same, but how you think about it differs.) To see this in the translation-rotation example, begin by visualizing the object with a coordinate system tied to it. The translation operation moves the object and its coordinate system down the *x*-axis. Then, the rotation occurs about the (now-translated) origin, so the object rotates in place in its position on the axis.

This approach is what you should use for applications such as articulated robot arms, where there are joints at the shoulder, elbow, and wrist, and on each of the fingers. To figure out where the tips of the fingers go relative to the body, you'd like to start at the shoulder, go down to the wrist, and so on, applying the appropriate rotations and translations at each joint. Thinking about it in reverse would be far more confusing.

This second approach can be problematic, however, in cases where scaling occurs, and especially so when the scaling is nonuniform (scaling different amounts along the different axes). After uniform scaling, translations move a vertex by a multiple of what they did before, as the coordinate system is stretched. Nonuniform scaling mixed with rotations may make the axes of the local coordinate system nonperpendicular.

As mentioned earlier, you normally issue viewing transformation commands in your program before any modeling transformations. In this way, a vertex in a model is first transformed into the desired orientation and then transformed by the viewing operation. Since the matrix multiplications must be specified in reverse order, the viewing commands need to come first. Note, however, that you don't need to specify either viewing or modeling transformations if you're satisfied with the default conditions. If there's no viewing transformation, the "camera" is left in the default position at the origin, pointing toward the negative *z*-axis; if there's no modeling transformation, the model isn't moved, and it retains its specified position, orientation, and size.

Since the commands for performing modeling transformations can be used to perform viewing transformations, modeling transformations are *discussed* first, even if viewing transformations are actually *issued* first. This order for discussion also matches the way many programmers think when planning their code. Often, they write all the code necessary to compose the scene, which involves transformations to position and orient objects correctly relative to each other. Next, they decide where they want the viewpoint to be relative to the scene they've composed, and then they write the viewing transformations accordingly.

**Modeling Transformations**

The three OpenGL routines for modeling transformations are **glTranslate*()**, **glRotate*()**, and **glScale*()**. As you might suspect, these routines transform an object (or coordinate system, if you're thinking of it in that way) by moving, rotating, stretching, shrinking, or reflecting it. All three commands are equivalent to producing an appropriate translation, rotation, or scaling matrix, and then calling **glMultMatrix*()** with that matrix as the argument. However, using these three routines might be faster than using **glMultMatrix*()**: OpenGL automatically computes the matrices for you. (See Appendix F if you're interested in the details.)

In the command summaries that follow, each matrix multiplication is described in terms of what it does to the vertices of a geometric object using the fixed coordinate system approach, and in terms of what it does to the local coordinate system that's attached to an object.
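To make the equivalence concrete, here is the matrix that a call such as **glTranslatef(**x, y, z**)** multiplies onto the current matrix, built in plain C. This is an illustrative sketch (the helper name is invented); in a real program you would simply call **glTranslatef()**.

```c
#include <assert.h>

/* Build the 4x4 translation matrix in the column-major layout that
 * glMultMatrix*() expects; glTranslatef(x, y, z) is equivalent to
 * calling glMultMatrixf() with this matrix. */
static void translation_matrix(float x, float y, float z, float m[16])
{
    for (int i = 0; i < 16; ++i)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f;   /* start from the identity */
    m[12] = x;   /* the fourth column carries the offsets */
    m[13] = y;
    m[14] = z;
}
```

Multiplying this matrix by a point (*px*, *py*, *pz*, 1) adds the offsets to each coordinate, which is exactly the translation described in the next summary.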

**Translate**

void **glTranslate**{fd}(*TYPE x*, *TYPE y*, *TYPE z*);

Multiplies the current matrix by a matrix that moves (translates) an object by the given *x*-, *y*-, and *z*-values (or moves the local coordinate system by the same amounts).

Figure 3-5 shows the effect of **glTranslate*()**.

**Figure 3-5** Translating an Object

Note that using (0.0, 0.0, 0.0) as the argument for **glTranslate*()** is the identity operation—that is, it has no effect on an object or its local coordinate system.

**Rotate**

void **glRotate**{fd}(*TYPE angle*, *TYPE x*, *TYPE y*, *TYPE z*);

Multiplies the current matrix by a matrix that rotates an object (or the local coordinate system) in a counterclockwise direction about the ray from the origin through the point (*x*, *y*, *z*). The *angle* parameter specifies the angle of rotation in degrees.

The effect of **glRotatef(**45.0, 0.0, 0.0, 1.0**)**, which is a rotation of 45 degrees about the *z*-axis, is shown in Figure 3-6.

**Figure 3-6** Rotating an Object

Note that an object that lies farther from the axis of rotation is more dramatically rotated (has a larger orbit) than an object drawn near the axis. Also, if the *angle* argument is zero, the **glRotate*()** command has no effect.

**Scale**

void **glScale**{fd}(*TYPE x*, *TYPE y*, *TYPE z*);

Multiplies the current matrix by a matrix that stretches, shrinks, or reflects an object along the axes. Each *x*-, *y*-, and *z*-coordinate of every point in the object is multiplied by the corresponding argument *x*, *y*, or *z*. With the local coordinate system approach, the local coordinate axes are stretched, shrunk, or reflected by the *x*-, *y*-, and *z*-factors, and the associated object is transformed with them.

Figure 3-7 shows the effect of **glScalef(**2.0, -0.5, 1.0**)**.

**Figure 3-7** Scaling and Reflecting an Object

**glScale*()** is the only one of the three modeling transformations that changes the apparent size of an object: scaling with values greater than 1.0 stretches an object, and using values less than 1.0 shrinks it. Scaling with a -1.0 value reflects an object across an axis. The identity values for scaling are (1.0, 1.0, 1.0). In general, you should limit your use of **glScale*()** to those cases where it is necessary. Using **glScale*()** decreases the performance of lighting calculations, because the normal vectors have to be renormalized after transformation.

**Note -** A scale value of zero collapses all object coordinates along that axis to zero. It's usually not a good idea to do this, because such an operation cannot be undone. Mathematically speaking, the matrix cannot be inverted, and inverse matrices are required for certain lighting operations (see Chapter 5). Sometimes collapsing coordinates does make sense; the calculation of shadows on a planar surface is one such application (see "Shadows" in Chapter 14). In general, if a coordinate system is to be collapsed, the projection matrix should be used, rather than the modelview matrix.

**A Modeling Transformation Code Example**

Example 3-2 is a portion of a program that renders a triangle four times, as shown in Figure 3-8. These are the four transformed triangles:

- A solid wireframe triangle is drawn with no modeling transformation.
- The same triangle is drawn again, but with a dashed line stipple, and translated (to the left—along the negative *x*-axis).
- A triangle is drawn with a long dashed line stipple, with its height (*y*-axis) halved and its width (*x*-axis) increased by 50 percent.
- A rotated triangle, made of dotted lines, is drawn.

**Figure 3-8** Modeling Transformation Example

**Example 3-2** Using Modeling Transformations: model.c

    glLoadIdentity();
    glColor3f(1.0, 1.0, 1.0);
    draw_triangle();                /* solid lines */

    glEnable(GL_LINE_STIPPLE);      /* dashed lines */
    glLineStipple(1, 0xF0F0);
    glLoadIdentity();
    glTranslatef(-20.0, 0.0, 0.0);
    draw_triangle();

    glLineStipple(1, 0xF00F);       /* long dashed lines */
    glLoadIdentity();
    glScalef(1.5, 0.5, 1.0);
    draw_triangle();

    glLineStipple(1, 0x8888);       /* dotted lines */
    glLoadIdentity();
    glRotatef(90.0, 0.0, 0.0, 1.0);
    draw_triangle();
    glDisable(GL_LINE_STIPPLE);

Note the use of **glLoadIdentity()** to isolate the effects of modeling transformations; initializing the matrix values prevents successive transformations from having a cumulative effect. Even though using **glLoadIdentity()** repeatedly has the desired effect, it may be inefficient, because you may have to respecify viewing or modeling transformations. (See "Manipulating the Matrix Stacks" for a better way to isolate transformations.)

**Note -** Sometimes, programmers who want a continuously rotating object attempt to achieve this by repeatedly applying a rotation matrix that has small values. The problem with this technique is that because of round-off errors, the product of thousands of tiny rotations gradually drifts away from the value you really want (it might even become something that isn't a rotation). Instead of using this technique, increment the angle and issue a new rotation command with the new angle at each update step.

**Nate Robins' Transformation Tutorial**

If you have downloaded Nate Robins' suite of tutorial programs, this is an opportune time to run the **transformation** tutorial. (For information on how and where to download these programs, see "Nate Robins' OpenGL Tutors.") With this tutorial, you can experiment with the effects of rotation, translation, and scaling.

**Viewing Transformations**

A viewing transformation changes the position and orientation of the viewpoint. If you recall the camera analogy, the viewing transformation positions the camera tripod, pointing the camera toward the model. Just as you move the camera to some position and rotate it until it points in the desired direction, viewing transformations are generally composed of translations and rotations. Also remember that to achieve a certain scene composition in the final image or photograph, you can either move the camera or move all the objects in the opposite direction. Thus, a modeling transformation that rotates an object counterclockwise is equivalent to a viewing transformation that rotates the camera clockwise, for example. Finally, keep in mind that the viewing transformation commands must be called before any modeling transformations are performed, so that the modeling transformations take effect on the objects first.

You can manufacture a viewing transformation in any of several ways, as described next. You can also choose to use the default location and orientation of the viewpoint, which is at the origin, looking down the negative *z*-axis.

- Use one or more modeling transformation commands (that is, **glTranslate*()** and **glRotate*()**). You can think of the effect of these transformations as moving the camera position or as moving all the objects in the world, relative to a stationary camera.
- Use the Utility Library routine **gluLookAt()** to define a line of sight. This routine encapsulates a series of rotation and translation commands.
- Create your own utility routine to encapsulate rotations and translations. Some applications might require custom routines that allow you to specify the viewing transformation in a convenient way. For example, you might want to specify the roll, pitch, and heading rotation angles of a plane in flight, or you might want to specify a transformation in terms of polar coordinates for a camera that's orbiting around an object.

**Using glTranslate*() and glRotate*()**

When you use modeling transformation commands to emulate viewing transformations, you're trying to move the viewpoint in a desired way while keeping the objects in the world stationary. Since the viewpoint is initially located at the origin and since objects are often most easily constructed there as well (see Figure 3-9), you generally have to perform some transformation so that the objects can be viewed. Note that, as shown in the figure, the camera initially points down the negative *z*-axis. (You're seeing the back of the camera.)

**Figure 3-9** Object and Viewpoint at the Origin

In the simplest case, you can move the viewpoint backward, away from the objects; this has the same effect as moving the objects forward, or away from the viewpoint. Remember that, by default, forward is down the negative *z*-axis; if you rotate the viewpoint, forward has a different meaning. Therefore, to put five units of distance between the viewpoint and the objects by moving the viewpoint, as shown in Figure 3-10, use

    glTranslatef(0.0, 0.0, -5.0);

**Figure 3-10** Separating the Viewpoint and the Object

This routine moves the objects in the scene -5 units along the *z*-axis. This is also equivalent to moving the camera +5 units along the *z*-axis.

Now suppose you want to view the objects from the side. Should you issue a rotate command before or after the translate command? If you're thinking in terms of a grand, fixed coordinate system, first imagine both the object and the camera at the origin. You could rotate the object first and then move it away from the camera so that the desired side is visible. You know that with the fixed coordinate system approach, commands have to be issued in the opposite order in which they should take effect, so you know that you need to write the translate command in your code first and follow it with the rotate command.

Now let's use the local coordinate system approach. In this case, think about moving the object and its local coordinate system away from the origin; then, the rotate command is carried out using the now-translated coordinate system. With this approach, commands are issued in the order in which they're applied, so once again the translate command comes first. Thus, the sequence of transformation commands to produce the desired result is

    glTranslatef(0.0, 0.0, -5.0);
    glRotatef(90.0, 0.0, 1.0, 0.0);

If you're having trouble keeping track of the effect of successive matrix multiplications, try using both the fixed and local coordinate system approaches and see whether one makes more sense to you. Note that with the fixed coordinate system, rotations always occur about the grand origin, whereas with the local coordinate system, rotations occur about the origin of the local system. You might also try using the **gluLookAt()** utility routine described next.

**Using the gluLookAt() Utility Routine**

Often, programmers construct a scene around the origin or some other convenient location and then want to look at it from an arbitrary point to get a good view of it. As its name suggests, the **gluLookAt()** utility routine is designed for just this purpose. It takes three sets of arguments, which specify the location of the viewpoint, define a reference point toward which the camera is aimed, and indicate which direction is up. Choose the viewpoint to yield the desired view of the scene. The reference point is typically somewhere in the middle of the scene. (If you've built your scene at the origin, the reference point is probably the origin.) It might be a little trickier to specify the correct up-vector. Again, if you've built some real-world scene at or around the origin and if you've been taking the positive *y*-axis to point upward, then that's your up-vector for **gluLookAt()**. However, if you're designing a flight simulator, up is the direction perpendicular to the plane's wings, from the plane toward the sky when the plane is right-side-up on the ground.

The **gluLookAt()** routine is particularly useful when you want to pan across a landscape, for instance. With a viewing volume that's symmetric in both *x* and *y*, the (*eyex, eyey, eyez*) point specified is always in the center of the image on the screen, so you can use a series of commands to move this point slightly, thereby panning across the scene.

void **gluLookAt**(GLdouble *eyex*, GLdouble *eyey*, GLdouble *eyez*, GLdouble *centerx*, GLdouble *centery*, GLdouble *centerz*, GLdouble *upx*, GLdouble *upy*, GLdouble *upz*);

Defines a viewing matrix and multiplies it to the right of the current matrix. The desired viewpoint is specified by *eyex*, *eyey*, and *eyez*. The *centerx*, *centery*, and *centerz* arguments specify any point along the desired line of sight, but typically they specify some point in the center of the scene being looked at. The *upx*, *upy*, and *upz* arguments indicate which direction is up (that is, the direction from the bottom to the top of the viewing volume).

In the default position, the camera is at the origin, is looking down the negative *z*-axis, and has the positive *y*-axis as straight up. This is the same as calling

    gluLookAt(0.0, 0.0, 0.0, 0.0, 0.0, -100.0, 0.0, 1.0, 0.0);

The *z*-value of the reference point is -100.0, but it could be any negative *z*, because the line of sight will remain the same. In this case, you don't actually want to call **gluLookAt()**, because this is the default (see Figure 3-11) and you are already there. (The lines extending from the camera represent the viewing volume, which indicates its field of view.)

**Figure 3-11** Default Camera Position

Figure 3-12 shows the effect of a typical **gluLookAt()** routine. The camera position (*eyex, eyey, eyez*) is at (4, 2, 1). In this case, the camera is looking right at the model, so the reference point is at (2, 4, -3). An orientation vector of (2, 2, -1) is chosen to rotate the viewpoint to this 45-degree angle.

**Figure 3-12** Using gluLookAt()

Therefore, to achieve this effect, call

    gluLookAt(4.0, 2.0, 1.0, 2.0, 4.0, -3.0, 2.0, 2.0, -1.0);

Note that **gluLookAt()** is part of the Utility Library, rather than the basic OpenGL library. This isn't because it's not useful, but because it encapsulates several basic OpenGL commands—specifically, **glTranslate*()** and **glRotate*()**. To see this, imagine a camera located at an arbitrary viewpoint and oriented according to a line of sight, both as specified with **gluLookAt()**, and a scene located at the origin. To "undo" what **gluLookAt()** does, you need to transform the camera so that it sits at the origin and points down the negative *z*-axis, the default position. A simple translate moves the camera to the origin. You can easily imagine a series of rotations about each of the three axes of a fixed coordinate system that would orient the camera so that it pointed toward negative *z*-values. Since OpenGL allows rotation about an arbitrary axis, you can accomplish any desired rotation of the camera with a single **glRotate*()** command.

**Note -** You can have only one active viewing transformation. You cannot try to combine the effects of two viewing transformations, any more than a camera can have two tripods. If you want to change the position of the camera, make sure you call **glLoadIdentity()** to erase the effects of any current viewing transformation.

**Nate Robins' Projection Tutorial**

If you have Nate Robins' suite of tutorial programs, run the **projection** tutorial. With this tutorial, you can see the effects of changes to the parameters of **gluLookAt()**.

**Advanced**

To transform any arbitrary vector so that it's coincident with another arbitrary vector (for instance, the negative *z*-axis), you need to do a little mathematics. The axis about which you want to rotate is given by the cross product of the two normalized vectors. To find the angle of rotation, normalize the initial two vectors. The cosine of the desired angle between the vectors is equal to the dot product of the normalized vectors. The angle of rotation around the axis given by the cross product is always between 0 and 180 degrees. (See Appendix E for definitions of cross and dot products.)

Note that computing the angle between two normalized vectors by taking the inverse cosine of their dot product is not very accurate, especially for small angles, but it should work well enough to get you started.


**Creating a Custom Utility Routine**

**Advanced**

For some specialized applications, you might want to define your own transformation routine. Since this is rarely done and is a fairly advanced topic, it's left mostly as an exercise for the reader. The following exercises suggest two custom viewing transformations that might be useful.

**Try This**

Suppose you're writing a flight simulator and you'd like to display the world from the point of view of the pilot of a plane. The world is described in a coordinate system with the origin on the runway and the plane at coordinates (*x*, *y*, *z*). Suppose further that the plane has some *roll*, *pitch*, and *heading* (these are rotation angles of the plane relative to its center of gravity).

- Show that the following routine could serve as the viewing transformation:

    void pilotView(GLdouble planex, GLdouble planey, GLdouble planez,
                   GLdouble roll, GLdouble pitch, GLdouble heading);
