How does a coordinate end up on screen

Started by
3 comments, last by MessageBox 20 years, 6 months ago
I don't fully understand how coordinates specified in object space end up in their positions on screen. I know the modelview matrix puts them in camera space, but I don't know how the projection and perspective matrices assist in the transformation process.
quote: This is what I got from an OpenGL FAQ:
011 How are coordinates transformed? What are the different coordinate spaces?
Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.
Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.
Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.
Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.
Would someone mind showing me the math behind how a vertex (1.0, 0.0, 0.5), for example, ends up in a certain location on the screen? And when using a perspective projection, how does the viewing frustum consist of planes at x, y, z = ±1 when the frustum is not aligned along the axes, i.e. it extends out based on the FOV angle?
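Here is one way to trace that vertex through the whole pipeline numerically. The modelview (a plain translation 3 units down -Z), the 90° field of view, the 1/100 near/far planes, and the 640x480 viewport are all my own illustrative choices, not anything from the thread; the projection matrix is the one gluPerspective builds.

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # Same matrix gluPerspective constructs (row-major here, applied as M @ v).
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Assumed setup: modelview just translates the scene 3 units down -Z.
modelview = np.eye(4)
modelview[2, 3] = -3.0
projection = perspective(90.0, 1.0, 1.0, 100.0)

obj = np.array([1.0, 0.0, 0.5, 1.0])   # object coordinates
eye = modelview @ obj                   # eye coordinates: (1, 0, -2.5, 1)
clip = projection @ eye                 # clip coordinates
ndc = clip[:3] / clip[3]                # perspective divide -> NDC

# Viewport transform for a 640x480 window with its origin at (0, 0):
w, h = 640, 480
win_x = (ndc[0] * 0.5 + 0.5) * w
win_y = (ndc[1] * 0.5 + 0.5) * h
print(ndc, (win_x, win_y))              # NDC x = 0.4, window (448.0, 240.0)
```

With these numbers the vertex sits 2.5 units in front of the camera, divides out to NDC x = 0.4, and lands at pixel (448, 240).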
Essentially, the projection matrix deforms the frustum you are expecting into a cuboid (the perspective is introduced here, so from this point on the coordinates are essentially orthographic). During normalisation (the divide by W), this cuboid is transformed into the cube you describe.
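A quick numeric sketch of that point, addressing the "how can the slanted frustum planes become x, y, z = ±1" question: take any point lying on the frustum's slanted right-hand plane and push it through the projection and the divide by W, and it always comes out on the NDC plane x = +1. The fovy = 90°, aspect = 1, near = 1, far = 100 values are my own illustrative choices.

```python
import numpy as np

# gluPerspective-style matrix for fovy = 90 deg, aspect = 1, near = 1, far = 100.
f = 1.0  # cot(fovy / 2) = 1 / tan(45 deg)
near, far = 1.0, 100.0
P = np.array([
    [f, 0.0, 0.0, 0.0],
    [0.0, f, 0.0, 0.0],
    [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
    [0.0, 0.0, -1.0, 0.0],
])

ndc_x = []
for z in (-1.0, -10.0, -50.0):
    edge = np.array([-z, 0.0, z, 1.0])  # on the right frustum plane: x = -z at this fov
    clip = P @ edge
    ndc_x.append(clip[0] / clip[3])     # perspective divide

print(ndc_x)  # every sample lands on the NDC plane x = +1
```

The divide by W is what straightens the slanted plane: points farther away have a larger W, so they are squeezed inward by exactly the amount the frustum widens.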
Hey,

There's a most excellent OpenGL tutorial that will illustrate much of this for you, from Nate Robins:

http://www.xmission.com/~nate/tutors.html

You're looking for the projection tutorial. It doesn't illustrate everything, but it will show you the effect of different perspective projections. In one window, it shows what you see onscreen given the projection. In another window, it shows you (from a different camera) what the view frustum looks like, where the camera is, and where the object sits within the frustum.

Graham Rhodes
Senior Scientist
Applied Research Associates, Inc.
Moderator, Math & Physics forum @ gamedev.net
Ultimately, a ray is cast from the position of the camera to the world coordinate, and where that ray intersects the viewing plane is where the vertex ends up on screen. Then GL interpolates from that vertex to the next, incorporating color and texture info at each pixel.
Why don't alcoholics make good calculus teachers? Because they don't know their limits! Oh come on, Newton wasn't THAT smart...
You have a frustum with a viewing plane. Behind the viewing plane you have a view point: the tip of the frustum cone, or the camera.

If you look at it in 2D space you can see similar triangles: one formed between the camera (the origin) and the viewing plane, and one between the camera and the far clipping plane. Let d be the distance from the camera to the viewing plane. Using the properties of similar triangles you can see that y/d = y0/z, where y0 is the y component of the vertex in question that you want to project, and y is, you guessed it, the coordinate on the viewing plane, or virtual screen. Solve for y and you get y = d*y0/z. You do the same to find the x screen coordinate.

Given a camera-space point [xc, yc, zc, 1], you can calculate the screen coordinate by multiplying by this matrix:
[xs, ys, zc, 1] =

[xc, yc, zc, 1] *
[d/z  0   0  0]
[ 0  d/z  0  0]
[ 0   0   1  0]
[ 0   0   0  1]

(z is carried through unchanged so it can still be used for depth.)
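The similar-triangles formula above can be sketched in a few lines. The function name and the sample values are mine, chosen only to illustrate; d is the camera-to-viewing-plane distance as defined in the post.

```python
# Minimal sketch of the similar-triangles projection described above:
# x = d * x0 / z, y = d * y0 / z, with d the distance from the camera
# to the viewing plane.
def project(x0, y0, z, d):
    return d * x0 / z, d * y0 / z

# A vertex at depth 8, viewing plane at distance 2: everything is
# scaled down by d/z = 1/4.
print(project(2.0, 4.0, 8.0, 2.0))  # (0.5, 1.0)
```

Note that the scale factor d/z depends on the vertex's own z, which is exactly why this step is not a single fixed linear map; OpenGL instead folds z into the W component and does the divide after the matrix multiply.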

This topic is closed to new replies.
