Frustum culling with quadtrees (or something similar)

Started by
11 comments, last by Antar3s 13 years, 6 months ago
Hi!

I'm a beginner, and this is the first time I've developed a simple first-person game featuring view frustum culling and a scene tree (a quadtree, to be exact).

So, I coded the tree, the methods to traverse it, and the test to check whether a bounding box overlaps the frustum.
Now I have to move the camera, rotating and translating it as in a typical first-person game.
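For reference, the box-vs-frustum overlap test is commonly written as a per-plane check against the box corner farthest along each plane normal. This is only a minimal sketch with illustrative names (`Vec3`, `Plane`, `boxIntersectsFrustum` are not from the actual code):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// A plane stored as n.p + d = 0, with the normal pointing into the frustum.
struct Plane { Vec3 n; float d; };

// Conservative AABB-vs-frustum test: the box is culled only if it lies
// entirely behind at least one plane.  For each plane we test the box
// corner farthest along the plane normal (the "positive vertex").
bool boxIntersectsFrustum(const Vec3& boxMin, const Vec3& boxMax,
                          const Plane planes[6])
{
    for (int i = 0; i < 6; i++) {
        const Plane& p = planes[i];
        Vec3 farthest;
        farthest.x = (p.n.x >= 0.0f) ? boxMax.x : boxMin.x;
        farthest.y = (p.n.y >= 0.0f) ? boxMax.y : boxMin.y;
        farthest.z = (p.n.z >= 0.0f) ? boxMax.z : boxMin.z;
        if (p.n.x * farthest.x + p.n.y * farthest.y +
            p.n.z * farthest.z + p.d < 0.0f)
            return false; // completely outside this plane
    }
    return true; // intersects or is inside the frustum
}
```

The test is conservative: a box that straddles two planes near a frustum corner can be reported as visible even when it isn't, which is fine for culling.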

And that's the problem!

I know that the best way to rotate and translate the camera consists of rotating and translating the world in the opposite direction.
But what about the position of each object in the world?

To test whether some part of the world overlaps the frustum after a movement (a rotation or a translation, it's the same), I need to update either the position of each object (the position of each vertex of each bounding box) or the position of the camera.

Which is the right way?
I read many tutorials about frustum culling, but I didn't find anything about updating object/camera positions.

Thanks for your help.

[Edited by - Antar3s on October 20, 2010 2:02:54 AM]
The best way to move the camera is to... wait for it... move the camera.
Where you got the opposite idea I haven't a clue, but for sure, it is not better to move the objects. Case in point: in a scene with 2 objects and 1 camera, to move the camera you could (A) move it, resulting in 1 position update, or (B) move the 2 objects, resulting in 2 position updates.

The view transform handles transforming world coordinates to view-space coordinates. It should be rebuilt every time the camera is moved or reoriented. The frustum is a representation of the bounding volume of view space: what the camera sees. So if you move the camera, you move the frustum. The world-space position of each object in the scene remains static, unless the objects themselves are moving.
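To make this concrete: one standard way to keep the frustum in sync with the camera (not necessarily what any particular engine does) is to re-extract the six frustum planes from the combined projection * view matrix every time the view matrix is rebuilt, using the well-known Gribb/Hartmann row combinations. A sketch, assuming column-major OpenGL-style matrices:

```cpp
#include <cassert>

// Gribb/Hartmann frustum plane extraction.  m is the combined clip matrix
// (projection * view), column-major as in OpenGL: element (row r, column c)
// is stored at m[c*4 + r].  Each plane is (a, b, c, d) with
// a*x + b*y + c*z + d >= 0 for points inside the frustum.
void extractFrustumPlanes(const float m[16], float planes[6][4])
{
    // Fetch the four rows of the clip matrix.
    float row[4][4];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            row[r][c] = m[c * 4 + r];

    // Each plane is a sum or difference of the fourth row with another row.
    for (int c = 0; c < 4; c++) {
        planes[0][c] = row[3][c] + row[0][c]; // left
        planes[1][c] = row[3][c] - row[0][c]; // right
        planes[2][c] = row[3][c] + row[1][c]; // bottom
        planes[3][c] = row[3][c] - row[1][c]; // top
        planes[4][c] = row[3][c] + row[2][c]; // near
        planes[5][c] = row[3][c] - row[2][c]; // far
    }
}
```

With this approach the object positions never change for the camera's sake; only the six planes are recomputed per frame, and the quadtree is tested against them.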



Quote:Original post by Burnt_Fyr
The best way to move the camera is to... wait for it... move the camera.
Where you got the opposite idea I haven't a clue, but for sure, it is not better to move the objects. Case in point: in a scene with 2 objects and 1 camera, to move the camera you could (A) move it, resulting in 1 position update, or (B) move the 2 objects, resulting in 2 position updates.


So, I was right when I thought that updating each object's position is absurd...
I wonder why many tutorials suggest rotating and translating the camera by applying transformations to the modelview matrix (I'm using OpenGL).

Anyway, I actually translated the camera without problems, simply by modifying its position when the W, A, S, D keys are pressed.
But I still have problems with rotation. How can I rotate the camera around the X and Y axes?
I tried changing the lookat vector coordinates according to an angle, but the result was terrible.

Could you suggest a good solution?

Of course, thanks for your answer.
Quote:Original post by Antar3s
So, I was right when I thought that updating each object's position is absurd...
I wonder why many tutorials suggest rotating and translating the camera by applying transformations to the modelview matrix (I'm using OpenGL).
I'm not sure what you mean by that, but I doubt the tutorials you're referring to were suggesting moving each object manually or doing anything else similarly ill-advised.

In most cases, any difference between 'moving the world' and 'moving the camera' is purely conceptual. Generally speaking, you don't create the effect of motion through a 3-d environment by updating the positions and orientations of every object in the simulation manually, and I doubt any tutorial would suggest that (or at least it would be uncommon to do so).

As far as the graphics pipeline is concerned, basically what happens is that the geometry goes through a transformation or series of transformations that takes it from the local space in which it's defined all the way to clip space. Where the camera appears to be in the world is simply a product of how this transform or series of transforms is set up.
The modelview matrix (I'm assuming GL from this point on) is a concatenation of the model's world transform and the camera's inverse world transform. The model's world transform takes the local position of an object and places it in the world, and the view transform (the camera's inverse world transform) takes a position in world space and places it in view/camera/eye space.
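As a worked example of that concatenation (using throwaway column-major matrix helpers, not a real API): with the camera sitting at (0, 0, 5) and an object placed at x = 1, the modelview matrix takes the object's local origin to (1, 0, -5) in eye space.

```cpp
#include <cassert>

// Minimal column-major 4x4 helpers, purely for illustration.
struct Mat4 { float m[16]; };

Mat4 translation(float x, float y, float z)
{
    Mat4 t = {{1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1}};
    return t;
}

// r = a * b, column-major: r[row][c] = sum_k a[row][k] * b[k][c].
Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r;
    for (int c = 0; c < 4; c++)
        for (int row = 0; row < 4; row++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a.m[k * 4 + row] * b.m[c * 4 + k];
            r.m[c * 4 + row] = s;
        }
    return r;
}

// Transform a point (w assumed to be 1), writing x, y, z.
void transformPoint(const Mat4& m, const float in[3], float out[3])
{
    for (int row = 0; row < 3; row++)
        out[row] = m.m[0*4+row]*in[0] + m.m[1*4+row]*in[1]
                 + m.m[2*4+row]*in[2] + m.m[3*4+row];
}
```

The view matrix here is just the inverse of the camera's world transform (a translation by -position, since the camera isn't rotated in this example).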

Your attempt at a solution (via changing the lookat position) is a usable method, so you're most likely fudging the process somewhere along the line. Could you post a short piece of code showing how you're attempting to modify the modelview matrix, so we can see where you went astray?
Ok, here's some code.

This is my application main loop.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
this->cam->setView();

set<RenderObject*> renderObjects;
this->sceneTree->cullRenderObjects(&renderObjects);

set<RenderObject*>::iterator iter = renderObjects.begin();
for (uint i = 0; i < renderObjects.size(); i++)
	(*iter++)->render();

glutSwapBuffers();


As you can see, I defined a set which is filled with the objects to render, according to the frustum boundaries. (Note that this is the first time I've implemented something like a quadtree, so it is probably not optimized.)

The Camera::setView function is the following.
gluLookAt(this->position.x, this->position.y, this->position.z,
          this->atPoint.x,  this->atPoint.y,  this->atPoint.z,
          this->yAxis.x,    this->yAxis.y,    this->yAxis.z);

My main question is how to rotate and translate the camera (via keyboard and mouse) without updating the position of each object in the world (the position that I test continuously to determine what is visible and what is not).
Should I move the camera (updating its position and its axes with a mathematical approach), or should I define some GL transformation which I don't understand, because of my lack of experience?
I'm not familiar with GL, but I'll give it a shot.

Have you tried rotating around only one axis, and does that give what you would expect? Is it a radians/degrees issue?

I'm going to assume that you're rotating around global axes and that rotating on both X and Y is the issue. Here is what I'm currently doing:

class camera {
   Vec3 position;
   Vec3 forward;
   Vec3 up;
   Vec3 right; // this is the x axis in a left handed system

   void rotatex(float radians) {
      matrix m;
      m.ARotation(&right, radians); // create an axis/angle rotation matrix,
                                    // using the local x axis
      forward.transform(&m); // transform the forward and up vectors only,
      up.transform(&m);      // because the right/left axis doesn't change
   }
};

This is just pseudocode, but GL will have equivalent functions somewhere.

void Camera::SetView() {
   Vec3 lookat = position + forward;
   gluLookAt(position.x, position.y, position.z,
             lookat.x,   lookat.y,   lookat.z,
             up.x,       up.y,       up.z);
}


EDIT: the code above shows rotation around the X axis; for the Y axis you would rotate the forward and right vectors, and use the Y axis for the rotation.
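For completeness, the axis/angle rotation the pseudocode relies on can be written out with Rodrigues' formula. This is only a sketch with illustrative names, not the thread's actual math library:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rodrigues' rotation: rotate v by 'radians' around the unit axis k.
// v' = v*cos(a) + (k x v)*sin(a) + k*(k.v)*(1 - cos(a))
Vec3 rotateAroundAxis(const Vec3& v, const Vec3& k, float radians)
{
    float c = cosf(radians), s = sinf(radians);
    float dot = k.x * v.x + k.y * v.y + k.z * v.z;
    Vec3 cross = { k.y * v.z - k.z * v.y,
                   k.z * v.x - k.x * v.z,
                   k.x * v.y - k.y * v.x };
    Vec3 r;
    r.x = v.x * c + cross.x * s + k.x * dot * (1.0f - c);
    r.y = v.y * c + cross.y * s + k.y * dot * (1.0f - c);
    r.z = v.z * c + cross.z * s + k.z * dot * (1.0f - c);
    return r;
}
```

Rotating the forward and up vectors around the camera's current right axis with this function gives the pitch described above, without touching the right axis itself.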
I'm afraid my task is a bit more complicated, but you grasped the problem.
I have to build a camera for a first-person game; I can rotate it around one axis, but not around X and Y at the same time, especially if I set the mouse to control these rotations.

To be precise, my camera must:
- move straight forward and backward, at a constant height (like walking);
- strafe, at a constant height too;
- pitch in the range of about a straight angle (like looking up and down);
- yaw with no limits.

Now, while the first two movements work well, the rotations are driving me mad.
I looked for some help from the OpenGL, GLU and GLUT libraries with no success, so I'm going to try something based on rotation matrices.
I hope it will work.

--- EDIT ---

I've just tried with rotation matrices and I'm disappointed!

Rotations work well separately, but when I set them to work together, the result is terrible.
My camera (controlled by the mouse) tends to roll, though there's no rotation around the Z axis.
I can't understand why it happens!

Here's source code.
void Camera::rotate(float yawAngle, float pitchAngle)
{
	Vector newZAxis;
	float zOffsetAngle = (this->zAxis.x < 0.0f) ? this->currentYaw : -this->currentYaw;

	if (abs(this->currentPitch + pitchAngle) > MAX_PITCH)
		pitchAngle = 0.0f;

	this->currentPitch += pitchAngle;
	this->currentYaw   += yawAngle;

	newZAxis = this->zAxis.rotateY(zOffsetAngle);
	newZAxis = newZAxis.rotateY(yawAngle);
	newZAxis = newZAxis.rotateX(pitchAngle);
	newZAxis = newZAxis.rotateY(-zOffsetAngle);

	this->zAxis   = Vector::normalize(newZAxis);
	this->yAxis   = Vector::normalize(this->zAxis.computeCrossProduct(this->xAxis));
	this->xAxis   = Vector::normalize(this->yAxis.computeCrossProduct(this->zAxis));
	this->atPoint = this->position + this->zAxis;
}


And these are my rotation functions.
Vector Vector::rotateX(float degrees)
{
	Vector rotatedVector;
	float angleSin = sinf(toRadians(degrees));
	float angleCos = cosf(toRadians(degrees));
	rotatedVector.x = this->x;
	rotatedVector.y = this->y * angleCos - this->z * angleSin;
	rotatedVector.z = this->y * angleSin + this->z * angleCos;
	return rotatedVector;
}

Vector Vector::rotateY(float degrees)
{
	Vector rotatedVector;
	float angleSin = sinf(toRadians(degrees));
	float angleCos = cosf(toRadians(degrees));
	rotatedVector.x = this->x * angleCos + this->z * angleSin;
	rotatedVector.y = this->y;
	rotatedVector.z = this->x * -angleSin + this->z * angleCos;
	return rotatedVector;
}

Vector Vector::rotateZ(float degrees)
{
	Vector rotatedVector;
	float angleSin = sinf(toRadians(degrees));
	float angleCos = cosf(toRadians(degrees));
	rotatedVector.x = this->x * angleCos - this->y * angleSin;
	rotatedVector.y = this->x * angleSin + this->y * angleCos;
	rotatedVector.z = this->z;
	return rotatedVector;
}


[Edited by - Antar3s on October 22, 2010 8:27:36 AM]
I glanced through your code, but rather than trying to debug it or fix it, I'll just describe how I think you'll probably want to go about this.

For this type of motion, an optimal representation is (IMO) a yaw angle, a pitch angle, and a position vector. Just those three things.

Each update, you grab the mouse deltas (or whatever), apply any scaling/smoothing/etc. that you want, and then use them to update the yaw and pitch variables. Although it's probably not strictly necessary, you'll generally want to keep the yaw value wrapped to the range [0, 2pi). The pitch should be clamped to the desired range, e.g. [-pi/2, pi/2].
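A minimal sketch of that wrapping and clamping (the constant names are mine):

```cpp
#include <cassert>
#include <cmath>

const float TWO_PI = 6.28318530718f;
const float PITCH_LIMIT = 1.57079632679f; // pi/2, the clamp range from the text

// Wrap yaw into [0, 2*pi).
float wrapYaw(float yaw)
{
    yaw = fmodf(yaw, TWO_PI);
    if (yaw < 0.0f)
        yaw += TWO_PI;
    return yaw;
}

// Clamp pitch to [-pi/2, pi/2].
float clampPitch(float pitch)
{
    if (pitch > PITCH_LIMIT)  return PITCH_LIMIT;
    if (pitch < -PITCH_LIMIT) return -PITCH_LIMIT;
    return pitch;
}
```

Because the two angles are stored and constrained independently, the roll drift that comes from accumulating incremental X and Y rotations can't occur.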

For linear motion, the forward direction vector can be computed directly from the yaw angle using trig; the side direction vector will then be one of the two vectors perpendicular to this vector and lying in the ground plane. Once you have these direction vectors, you can scale them by the desired speed and the current time delta and update the position vector as appropriate.
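As a sketch of that linear-motion step (assuming yaw 0 faces -z and positive yaw turns toward +x; your conventions may differ, and all names here are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Movement basis from the yaw angle alone; pitch is deliberately ignored
// so that walking stays at a constant height, as described above.
Vec3 forwardFromYaw(float yaw)
{
    Vec3 f = { sinf(yaw), 0.0f, -cosf(yaw) };
    return f;
}

Vec3 sideFromYaw(float yaw) // points to the camera's right
{
    Vec3 s = { cosf(yaw), 0.0f, sinf(yaw) };
    return s;
}

// One linear-motion update: position += direction * speed * dt.
Vec3 stepPosition(Vec3 pos, const Vec3& dir, float speed, float dt)
{
    pos.x += dir.x * speed * dt;
    pos.y += dir.y * speed * dt;
    pos.z += dir.z * speed * dt;
    return pos;
}
```

Strafing uses `sideFromYaw` instead of `forwardFromYaw`; both vectors lie in the ground plane by construction.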

The other part of the problem is building and uploading the view matrix. There are several ways to do this, but I'll go ahead and provide an example using (deprecated) convenience functions such as glTranslate*() and so forth.

For a 'forward' (e.g. model) transform, the order of transforms would be pitch, then yaw, then translation, which in terms of the aforementioned convenience functions might look like this:
glTranslatef(pos.x, pos.y, pos.z);
glRotatef(yaw, 0, 1, 0);
glRotatef(pitch, 1, 0, 0);
For a camera, we want to invert this transform, so we'd instead write:
glRotatef(-pitch, 1, 0, 0);
glRotatef(-yaw, 0, 1, 0);
glTranslatef(-pos.x, -pos.y, -pos.z);
There are a few other factors that might need to be considered (depending on how you're handling things), but that's the basic idea.
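If it helps to see why the reversed, negated order is the correct inverse: multiplying the forward transform by it should give the identity, since (T * Ry)^-1 = Ry(-a) * T(-p). A small numeric check with throwaway column-major helpers (names are mine, not a real API):

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[16]; };

Mat4 identity()
{
    Mat4 r = {{1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1}};
    return r;
}

Mat4 translation(float x, float y, float z)
{
    Mat4 r = identity();
    r.m[12] = x; r.m[13] = y; r.m[14] = z;
    return r;
}

Mat4 rotationY(float radians)
{
    Mat4 r = identity();
    float c = cosf(radians), s = sinf(radians);
    r.m[0] = c;  r.m[2] = -s;   // column-major: m[col*4 + row]
    r.m[8] = s;  r.m[10] = c;
    return r;
}

// r = a * b, column-major.
Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r;
    for (int c = 0; c < 4; c++)
        for (int row = 0; row < 4; row++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a.m[k * 4 + row] * b.m[c * 4 + k];
            r.m[c * 4 + row] = s;
        }
    return r;
}

bool nearlyIdentity(const Mat4& a)
{
    Mat4 i = identity();
    for (int k = 0; k < 16; k++)
        if (fabsf(a.m[k] - i.m[k]) > 1e-5f)
            return false;
    return true;
}
```

The same cancellation is why the glRotatef/glTranslatef calls above appear in the opposite order with negated arguments.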
Yes, that was my very first idea, but it creates problems with frustum culling.
I still need to track the camera's position and orientation to compute the frustum coordinates and select the visible objects from the entire world.
If you have any idea to solve this, it would be great.

This topic is closed to new replies.
