Need help understanding some pseudo-code (Solved)

Started by
2 comments, last by Zakwayda 19 years, 1 month ago
This is an excerpt from this page clicky
Quote:
The Rendering Algorithm

The pseudo-3D scene is rendered by classifying the eyepoint with respect to the root segment, recursively drawing all segments on the other side of that segment, drawing that segment, and then recursively drawing all segments on the same side of that segment. If the eyepoint intersects a segment, we're seeing it edge-on, so it isn't drawn. Because each segment is visited exactly once while drawing the scene, the scene can be rendered in O(n) time. Here is pseudocode for a method on a BSP tree node class that implements this algorithm:

draw3DScene()
    if location(eye.point) == frontSide
        back.draw3DScene()
        drawPolygon()
        front.draw3DScene()
    else if location(eye.point) == backSide
        front.draw3DScene()
        drawPolygon()
        back.draw3DScene()
    else
        front.draw3DScene()
        back.draw3DScene()

When we're ready to draw the scene, the eyepoint and look vector are used to determine the coordinate system for the camera space. Then each polygon is drawn using the following algorithm:

drawPolygon()
    transform segment endpoints to camera space
    clip segment to view frustum
    convert segment to polygon:
        set the width to the x values scaled down by the y values
        set the height to a constant value scaled down by the y values
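For concreteness, here is the quoted draw3DScene() traversal as a small runnable Python sketch. The class and method names (BSPNode, classify, draw_scene) are my own, not the applet's; `out` just records the order walls would be drawn in.

```python
class BSPNode:
    """One wall segment (two 2D endpoints) plus optional front/back subtrees."""
    def __init__(self, p1, p2, front=None, back=None):
        self.p1, self.p2 = p1, p2
        self.front, self.back = front, back

    def classify(self, point):
        """Side of this node's infinite line: +1 front, -1 back, 0 edge-on."""
        (x1, y1), (x2, y2) = self.p1, self.p2
        cross = (x2 - x1) * (point[1] - y1) - (y2 - y1) * (point[0] - x1)
        return (cross > 1e-9) - (cross < -1e-9)

    def draw_scene(self, eye, out):
        """Painter's-algorithm traversal: far side first, then this wall,
        then the near side. Edge-on walls (side == 0) are skipped."""
        side = self.classify(eye)
        first, second = (self.back, self.front) if side > 0 else (self.front, self.back)
        if first:
            first.draw_scene(eye, out)
        if side != 0:
            out.append((self.p1, self.p2))
        if second:
            second.draw_scene(eye, out)

root = BSPNode((0, 0), (4, 0),
               front=BSPNode((1, 2), (3, 2)),
               back=BSPNode((1, -2), (3, -2)))
order = []
root.draw_scene((2, 1), order)  # eye is on the front side of the root wall
print(order)  # back wall first, then the root wall, then the front wall
```

Each node is visited exactly once, which is where the O(n) bound in the quote comes from.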
I understand the draw3DScene() function; it's the drawPolygon() function I don't understand at all (what does "transform segment endpoints to camera space" mean, etc.?). If it helps, the "BSP trees" link at the top of the page takes you to the Java applet the pseudo-code is describing. [Edited by - Alt F4 on March 16, 2005 9:40:17 PM]
That little Java applet is pretty cool! It's a nice demonstration of BSP trees and raycasting...

Anyway, the 'draw polygon' function transforms the segments (walls) into camera space, and then projects them into screen space. Taking it one step at a time:

The 'camera space' part is a little complicated to describe if you're not familiar with matrices, transformations, etc. Briefly, in the 2.5d case the transformation into camera space is a translation by the negative of the camera's position, followed by a rotation by the negative of the camera's rotation. This re-aligns the world so that the camera is at the origin and looking directly down the y axis, which is necessary for the next step, projection.
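A minimal sketch of that translate-then-rotate step, assuming positive angles turn counter-clockwise and that a camera with angle 0 looks down the world's +y axis (the applet's exact conventions may differ):

```python
import math

def to_camera_space(point, cam_pos, cam_angle):
    """World space -> camera space: translate by the negative of the
    camera's position, then rotate by the negative of its angle, so the
    camera ends up at the origin looking down its local +y axis."""
    tx = point[0] - cam_pos[0]
    ty = point[1] - cam_pos[1]
    c, s = math.cos(-cam_angle), math.sin(-cam_angle)
    return (tx * c - ty * s, tx * s + ty * c)

# A point 3 units directly ahead of an unrotated camera lands at (0, 3):
print(to_camera_space((5.0, 5.0), (5.0, 2.0), 0.0))
```

After this step, a segment endpoint's y coordinate is its depth in front of the camera, which is exactly what the projection divide below needs.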

The idea with projection is simply that parallel lines appear to converge as distance from the viewer increases. We can achieve this effect by dividing the horizontal and vertical components (x and height in this case) by the distance from the camera (y).
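That divide-by-depth can be sketched like this; the screen width and the focal/wall-height constants are made-up illustration values, not the applet's:

```python
def project(cam_point, screen_w=320, focal=160.0, wall_h=64.0):
    """Perspective-project a camera-space point (x, y), where y is depth.
    Dividing by y shrinks both the horizontal offset and the wall height
    as distance grows. `focal` and `wall_h` are hypothetical constants."""
    x, y = cam_point
    assert y > 0, "clip to the view frustum first"
    screen_x = screen_w / 2 + focal * x / y   # horizontal screen position
    height = focal * wall_h / y               # on-screen wall height
    return screen_x, height

near = project((10.0, 10.0))
far = project((10.0, 20.0))   # same wall, twice as far away
print(near, far)              # the far wall comes out half as tall
```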

'Hope some of that helps.
I'm a little familiar with matrices and the like, so I think I get what you're saying. So before rendering, you take the two endpoints of the segment and multiply them by an inverse translation matrix like this:

[x, y, 1] * [  1   0   0 ]
            [  0   1   0 ]
            [ -A  -B   1 ]

where x and y are the coordinates of an endpoint, and A and B are the camera's coordinates. Then you multiply the new coordinates of the segment by an inverse rotation matrix like this?

[x, y] * [  cos(@)   sin(@) ]
         [ -sin(@)   cos(@) ]

Then you can do the scaling stuff and draw to the screen. I think that's right.
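Those two multiplications can be checked numerically with a quick sketch (row-vector convention, matching the matrices above):

```python
import math

def transform(point, cam_pos, angle):
    """[x, y, 1] times the inverse translation matrix, then [x, y] times
    the inverse rotation matrix [[cos, sin], [-sin, cos]]."""
    x, y = point[0] - cam_pos[0], point[1] - cam_pos[1]  # translation step
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)                # rotation step

# With no rotation, only the translation by (-A, -B) remains:
print(transform((3.0, 4.0), (1.0, 1.0), 0.0))  # -> (2.0, 3.0)
```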


Edit 2: Thanks for the help, jyk! I was thinking of positive rotation as being to the left, but yeah, you're right, it's all relative.

[Edited by - Alt F4 on March 16, 2005 9:08:15 PM]
That looks right to me. The only thing I can't tell from looking at your example is whether the inverse rotation matrix is correct, since it's dependent on which direction you define positive rotation to be. But as long as you know that you can invert the rotation matrix by either transposing it or negating the angle, you should be good to go.
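Both ways of inverting a rotation matrix can be verified numerically. This sketch uses the column-vector convention, but the transpose/negate-the-angle equivalence holds either way:

```python
import math

def rot(a):
    """2x2 rotation matrix for angle a (column-vector convention)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s], [s, c]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

a = 0.7
# Negating the angle and transposing give the same matrix...
same = all(abs(rot(-a)[i][j] - transpose(rot(a))[i][j]) < 1e-12
           for i in range(2) for j in range(2))
# ...and composing it with rot(a) yields (numerically) the identity.
ident = matmul(rot(a), transpose(rot(a)))
print(same, [[round(v, 9) for v in row] for row in ident])
```

This works because rotation matrices are orthogonal: their inverse is their transpose.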

This topic is closed to new replies.
