Viewing Frustum, Far and Near Clipping, etc.

Hi, I just posted this in 'Article Requests'. It is probably more suited to this message board. I would like to see a tutorial which explains how you calculate which objects in your 3D world are visible from a specified viewpoint (or rather, how to determine which objects should not be rendered because they are outside the viewing frustum) and then display them on the screen.

I have read and understood a lot of tutorials on translation, rotation, perspective, projection, etc. using matrices, but I have yet to come across a really simple tutorial which shows you how to take your perspective-projected coordinates and get them up on the screen. This tutorial could also cover the scaling of your final coordinates to the window (viewport transformation?).

If anybody knows where I can find some easy-to-understand information about these topics, either in a book or on the net, please reply to this message.

Thanks,
Paulcoz.
DX has a ComputeSphereVisibility function that can be used to cull whole groups of polys if you have the center of the polys and a radius that contains them. It's very simple to use and sends back data telling whether the polys are inside, outside, or on the edges of the viewing frustum. If you're not using DirectX, then roll your own.
--Shannon Schlomer, BLAZE Technologies, Inc.
It should be rather obvious that the sooner you can remove an object from your pipeline, the faster it will run. A technique that has worked well for me is computing the object's maximum radius when the object is loaded (by simply testing the distance of each vertex). Then, given only your object's world position and the camera's viewing volume, you can test the object against the edges. This way, you can even bypass the Local to World transformation for unseen objects. I hope this is a detailed enough description for you.

I-Shaolin,

I understand what you are saying (the theory), however I am stuck at the point where I try to implement the camera's viewing volume.

All of the objects in the 3D world have known coordinates (x, y, z), whereas to me these 'far' and 'near' planes are not defined so easily. I know that after applying a projection matrix an object looks smaller if it is in a different z plane, but my knowledge is non-existent when it comes to actually representing this plane and creating the viewing volume with a plane at near and far.

If you know where I can find a really concise explanation of this (a book, for example), please say so. I would be interested to know where you all learnt this kind of stuff.

Another question: if you do your clipping in 3D, do you still have to do it in 2D? Or the other way around: if you are going to do 2D clipping (on the screen, with Sutherland-Hodgman, say), do you still have to do 3D clipping?
I'll start off with the clipping question first, since it is the easiest to answer, and the answer is yes. Let me explain to you the clipping pipeline (so to say).

The first step is to remove any objects from your scene as quickly as possible. This is what I explained above. At this point, you are ONLY concerned with objects as a whole. They are either accepted or rejected.

After that, you can perform your Local to World transformation on any objects that are left. At this point, you can perform your backface culling to remove some of the individual polygons of your objects. This will remove all the polygons facing away from the camera, as shown in the sketch below. (Note: culling isn't really clipping, but...)
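
As a sketch of that step (this is not code from the thread; the vector type, the helper functions, and the counter-clockwise winding convention are assumptions), the test boils down to a dot product between the polygon's normal and the vector from the polygon to the camera:

struct Vec3 { float x, y, z; };

Vec3 Sub(const Vec3& a, const Vec3& b)
{
    Vec3 r; r.x = a.x - b.x; r.y = a.y - b.y; r.z = a.z - b.z; return r;
}

Vec3 Cross(const Vec3& a, const Vec3& b)
{
    Vec3 r;
    r.x = a.y * b.z - a.z * b.y;
    r.y = a.z * b.x - a.x * b.z;
    r.z = a.x * b.y - a.y * b.x;
    return r;
}

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// v0, v1, v2 are the polygon's vertices in world space (counter-clockwise when
// seen from the front); camPos is the camera's world position.
bool IsBackfacing(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& camPos)
{
    Vec3 normal   = Cross(Sub(v1, v0), Sub(v2, v0)); // face normal from the winding order
    Vec3 toCamera = Sub(camPos, v0);                 // vector from the polygon to the eye
    return Dot(normal, toCamera) <= 0.0f;            // facing away from the camera: cull it
}

If the polygons are already in camera space, the camera sits at the origin, so toCamera is just the negated vertex position.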

Then, you can perform the World to Camera transformation. At this point, we need to clip each individual polygon against the viewing volume. The reason we do this is because when we clipped our objects before, we simply checked to see if "any" part of the object could theoretically be seen from the camera. Now we are going to clip each polygon this way. When we are done, we'll be left with a list of polygons that are at least partially visible.

Finally, we perform our perspective projection. After that, we run a polygon clipping algorithm such as Sutherland-Hodgman. What you will be left with is a nice polygon list with all the polygons you need to draw.
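
Since the original question was about getting the projected coordinates onto the screen, here is a minimal sketch of that last step (the function and parameter names are illustrative, chosen to match the formulas later in the thread): project a camera-space point by dividing by its depth, then shift and flip it into window coordinates.

#include <assert.h>

// viewingDistance is the distance from the camera to the projection plane;
// screenW and screenH are the window size in pixels.
void ProjectToScreen(float x, float y, float z,
                     float viewingDistance, int screenW, int screenH,
                     int& screenX, int& screenY)
{
    assert(z > 0.0f); // the point must already be clipped against the near plane

    // Perspective projection: divide by depth.
    float px = (x * viewingDistance) / z;
    float py = (y * viewingDistance) / z;

    // Viewport transformation: move the origin from the center of the
    // projection plane to the top-left corner of the window, and flip Y
    // so that +Y points down as it does in screen space.
    screenX = (int)(px + screenW * 0.5f);
    screenY = (int)(screenH * 0.5f - py);
}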

Now, I guess I should say that this isn't the only way to do this. The two ideas behind clipping are...

1. Remove as much as you can as quickly as possible from your pipeline.
2. Make sure all your polygons fit on the screen before you draw them so you won't crash the computer.

There is such a thing as full 3D clipping, where the objects are completely clipped against the viewing volume, but this is a mathematical bitch and it just really isn't practical in games.

As far as full 2D clipping goes, it's possible, but you need to be careful. A problem occurs after you project your vertices: when you start throwing away the Z value, you'll be left with some vertices that are behind the camera. You can even cause a crash here due to the division in the projection calculation. See the sketch below.
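
This is why the clip against the near (Hither) plane has to happen before the projection divide. As a sketch (the names here are illustrative, not from the thread), clipping one polygon edge against the Hither plane is just a linear interpolation to the point where the edge crosses z = nearZ:

struct Vec3 { float x, y, z; };

// Assumes a and b lie on opposite sides of the plane z = nearZ, so the edge
// definitely crosses it and b.z - a.z is never zero here.
Vec3 ClipEdgeToHither(const Vec3& a, const Vec3& b, float nearZ)
{
    float t = (nearZ - a.z) / (b.z - a.z); // fraction of the way from a to b where the plane is crossed
    Vec3 p;
    p.x = a.x + t * (b.x - a.x);
    p.y = a.y + t * (b.y - a.y);
    p.z = nearZ;
    return p;
}

A full Sutherland-Hodgman pass against the Hither plane just walks each polygon edge, keeps the vertices in front of the plane, and inserts a clipped vertex like this wherever an edge crosses it.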

There are a lot of good books out there, and there are twice as many bad ones.

My favorite, and the one that I started with, is "Black Art of 3D Game Programming" by Andre Lamothe. It's a bit dated now, but the last half of the book is devoted to teaching 3D to people who have never seen it.

Another popular book is "Computer Graphics: Principles and Practice" by Foley and his posse. I'll warn you, this book reads like an automotive manual, and the organization is questionable. I don't recommend it for learning 3D, but it's a good reference and something you'll eventually grow into.

Damn, I just realized how much I have written. Let me look up a couple of things, and I'll explain to you how to do the object clipping.
Well, let me continue my discussion from a few minutes ago. Given an object's world position and a camera's world position, how do you clip an object?

I guess I was a little too vague before and I apologize. Here is an algorithm that is easy to understand, although not as elegant as it sounded above.

First off, you need to know the object's maximum radius. The best way to do this is to compute the radius when you load the object and store it with the object. It's an expensive operation, and you only want to perform it once. Here's a function...

#include <math.h>  // for sqrt()

void CObject3D::ComputeRadius()
{
    // Reset Our Object's Radius
    m_fRadius = 0.0f;

    // Used To Store Each Computed Radius
    float fNewRadius = 0.0f;

    // Used To Hold Each Vertex
    float x, y, z;

    // Test Each Vertex In The Object
    for (int i = 0; i < m_nNumVerticies; i++)
    {
        // Get The Next Vertex
        x = m_pLocalVerticies[i].x;
        y = m_pLocalVerticies[i].y;
        z = m_pLocalVerticies[i].z;

        // Compute The Distance To The Vertex
        fNewRadius = (float) sqrt(x*x + y*y + z*z);

        // Finally, See If The New Radius Is Greater Than The Old One
        if (fNewRadius > m_fRadius)
            m_fRadius = fNewRadius;
    }
}

Okay, I am going to assume that is enough of an explanation on that. Let's get back to clipping.

You can attempt to perform all of your object clipping in World space by attempting to build your arbitrary clipping planes, but I think I have a much easier way for you to check.

For a given object, take its world position and perform a World to Camera transformation on it. This will give you the object's center point in Camera space. Now, using this point, you can test it against the viewing volume (don't forget about the radius).

We'll do the following steps.

1. Test it against the Hither plane.
2. Test it against the Yon plane.
3. Test it against the four side planes.

The first two should be trivial. For the Hither plane, add the object's radius to the Z value of the point and test it against the Z value of the Hither plane. If it's greater, go on to the Yon plane; otherwise, reject the object. For the Yon plane, subtract the object's radius from the Z value of the point and test it against the Z value of the Yon plane. If it's less, then the object could still be visible.

If you have made it this far, we need to test it against the other planes, but they are a little more difficult since they're not perpendicular to the screen. This isn't as bad as it sounds though. Here are some formulas that will determine if a given point is IN the viewing volume.

x <= (ScreenWidth * z) / (2 * ViewingDistance)
x >= (-ScreenWidth * z) / (2 * ViewingDistance)

y <= (ScreenHeight * z) / (2 * ViewingDistance)
y >= (-ScreenHeight * z) / (2 * ViewingDistance)

And just for good measure...

z > Hither
z < Yon

If all of these tests pass, then a point is inside the viewing volume. I didn't include the radius in these tests because if you understand how these work, you can easily add the radius in when needed.

Here are a couple of notes on all of this. You can use these same tests later on when you are trying to clip full polygons by checking each point with them. Also, since more objects will be clipped than not in a given scene, it's better to write your code to take advantage of this by using negative logic. That way, the function won't take as long.

I really hope this helps. If you don't mind, reply back to this message to let me know if this explains things well enough.
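
To make those tests concrete, here is a minimal sketch of the whole object-level check in camera space (the function and parameter names are made up for illustration, and the radius is folded into each comparison in the simple way described above):

// (x, y, z) is the object's center in camera space, radius is its maximum radius.
// Returns false only if the bounding sphere is completely outside the viewing volume.
bool SphereInViewVolume(float x, float y, float z, float radius,
                        float screenW, float screenH, float viewDist,
                        float hither, float yon)
{
    // Hither and Yon planes (perpendicular to the view direction).
    if (z + radius < hither) return false;  // entirely closer than the near plane (possibly behind the camera)
    if (z - radius > yon)    return false;  // entirely beyond the far plane

    // Side planes, using the point-in-volume formulas above.
    float xLimit = (screenW * z) / (2.0f * viewDist);
    float yLimit = (screenH * z) / (2.0f * viewDist);

    if (x - radius >  xLimit) return false; // entirely off the right
    if (x + radius < -xLimit) return false; // entirely off the left
    if (y - radius >  yLimit) return false; // entirely off the top
    if (y + radius < -yLimit) return false; // entirely off the bottom

    return true; // at least partially inside
}

Note that adding the radius straight onto x and y like this treats the slanted side planes as if they were vertical walls at the object's depth, so it can occasionally reject an object that just grazes a screen edge; an exact test would measure the sphere's distance to each normalized plane, but this version is cheap and close enough for object-level culling.
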
I-Shaolin, thanks for your description.
Sorry I am taking so long to pick up on this stuff.

I think I will have a better understanding if you can answer these questions (I told you I need to do some more reading on the theory):

1(a) What is the purpose of the Local to World Transformation? 1(b) What does this do to the Local coordinates? 1(c) Is the Local origin just the centre point (0,0,0) in Local coordinates?

2(a) What is the purpose of the World to Camera Transformation? 2(b) What does this do to the World coordinates?

3. If I understood your description, the 'far' and 'near' planes are represented in Camera space, right?

I think I will understand the processes you described better if you explain these to me.

I guess these concepts are discussed in the Andre Lamothe book you mentioned.

Paulcoz.
The reason we perform a Local to World transformation is because, normally, objects are defined around a local origin. This way, certain operations like scaling and rotations work correctly. Also, it can help in defining an object. Take a cube, for example: it's rather easy to describe a cube when it's centered at the origin.

The local origin (0,0,0) is more conceptual than anything else. It just helps you visualize what's going on.

When you perform the Local to World transformation, you are basically adding your object's world position to each vertex in your object. This will translate your object into world space correctly.
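
In code, that step can be as simple as this sketch (the names are illustrative, and rotation/scaling, if any, would be applied before the translation):

struct Vec3 { float x, y, z; };

// Offsets each local-space vertex by the object's world position.
void LocalToWorld(const Vec3* localVerts, Vec3* worldVerts, int count, const Vec3& worldPos)
{
    for (int i = 0; i < count; i++)
    {
        worldVerts[i].x = localVerts[i].x + worldPos.x;
        worldVerts[i].y = localVerts[i].y + worldPos.y;
        worldVerts[i].z = localVerts[i].z + worldPos.z;
    }
}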

The Camera transformation is quite a bit more complex. What this step does is position every object in the world relative to the camera. This way we can start thinking about drawing objects onto the screen. I don't want to go into too long a discussion on this, because it would probably just confuse you more than anything else. It took me a while to really understand this. For now, just understand that the reason you perform this transformation is because you have to figure out how each of your objects will appear from the camera's position.

For your last question, the far and near planes are represented in Camera space. In fact, they are really just Z values away from the camera. The near plane is really needed because it makes sure that objects aren't behind you and that the camera isn't inside them (so to say). The far plane isn't really needed, but it helps get rid of objects in our scene that are too far away to really distinguish.

A side note here. This is where fog comes in. By adding fog, you can have objects be completely covered with fog just before they are clipped. That way, you don't see an object just disappear suddenly.
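
The usual way to do that is to fade based on the camera-space Z value. A rough sketch of linear fog (the names and start/end values are assumptions, with the end typically sitting at or just inside the far plane):

// Returns 0.0 for no fog and 1.0 for fully fogged; blend the polygon's
// color toward the fog color by this amount.
float FogAmount(float z, float fogStart, float fogEnd)
{
    if (z <= fogStart) return 0.0f;
    if (z >= fogEnd)   return 1.0f;
    return (z - fogStart) / (fogEnd - fogStart);
}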

On another note, I like how you are trying to understand 3D here. Trust me on this. Make yourself understand what is happening, completely. It can be a lot of work, but that's what separates a lot of wannabes from professionals. Once you understand what is happening and why, you can better understand mistakes, different libraries, and how to speed things up.

OK, I promise these will be my last questions on this topic. If you read this, I-Shaolin, please clarify this for me:

(1) Is the World to Camera transformation you mentioned just another way of saying 'perspective' transformation? For example, the matrix:

    [1, 0, 0, p]
    [0, 1, 0, q]
    [0, 0, 1, r]
    [0, 0, 0, 1]

where p, q or r are non-zero values (usually 1 / distance from the axis). Or are they different transformations altogether?

(2) If they are not the same transformation, can you tell me where you learnt all about this 'World to Camera' stuff, so I can go and look it up? I know you said it would confuse me, but I think this is the part that is missing from my program.

Thanks,
Paulcoz.
The World transformation and the Camera transformation are completely different. The World transformation moves all of your objects in their correct place in the world. The Camera transformation moves all of the world objects so you can see them from the view of the camera.
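
As a sketch of the difference (the names and storage convention here are assumptions, not from the thread): the Camera transformation is just the inverse of the transform that would place the camera in the world, i.e. translate by minus the camera's position, then rotate by the inverse of the camera's rotation. No perspective division happens here; that comes later, in the projection step.

struct Vec3 { float x, y, z; };

// camRot is the camera's 3x3 rotation matrix, stored row-major, which maps
// camera-space directions into world-space directions. Because a rotation
// matrix is orthonormal, its inverse is its transpose.
Vec3 WorldToCamera(const Vec3& worldPoint, const Vec3& camPos, const float camRot[3][3])
{
    // Translate so the camera sits at the origin.
    float tx = worldPoint.x - camPos.x;
    float ty = worldPoint.y - camPos.y;
    float tz = worldPoint.z - camPos.z;

    // Multiply by the transpose of camRot to undo the camera's orientation.
    Vec3 out;
    out.x = camRot[0][0] * tx + camRot[1][0] * ty + camRot[2][0] * tz;
    out.y = camRot[0][1] * tx + camRot[1][1] * ty + camRot[2][1] * tz;
    out.z = camRot[0][2] * tx + camRot[1][2] * ty + camRot[2][2] * tz;
    return out;
}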

The books I mentioned above are what I used, but since talking to you, I decided to write some articles for GameDev. Once they are done, they should help you and others out.
