Tesserex

OpenGL Quaternion camera, _translation_ problem


I learned all about quaternions yesterday and successfully implemented them in OpenGL for my camera. I can spin about just wonderfully now with no gimbal lock. I'm making a ship-flying type game, so obviously I want to be able to move the ship forward, in the local z direction. My code keeps track of the rotation quaternion, and for rotations I just multiply by a small pitch or roll angle quaternion to get the new rotation. To keep track of my world position, I have xpos, ypos, zpos. My problem seems to be getting the correct view vector out of the quaternion. My initial vector, for movement, is (0, 0, 1) because I want to move in Z. To rotate it with the quaternion, I looked up how to turn the quat into a rotation matrix. Multiplying that matrix by (0, 0, 1) gives its third column, which (I think) should be the new view vector that I add to my world position. Here's my code:
if (KeyDown(VK_SPACE))
	{
		// intended: add the rotated (0, 0, 1) vector, taken from the quaternion's rotation matrix, to the world position
		xpos += 2*rotation.x*rotation.z - 2*rotation.w*rotation.y;
		ypos += 2*rotation.y*rotation.z + 2*rotation.w*rotation.x;
		zpos += rotation.w*rotation.w - rotation.x*rotation.x - rotation.y*rotation.y + rotation.z*rotation.z;
	}

...
glTranslatef(0.0f,0.0f,-6.0f);	// Move Into The Screen

// draw the ship here
	
glRotatef(114.6*acos(rotation.w),rotation.x,rotation.y,rotation.z);

glTranslatef(xpos,ypos,zpos);


From the start, when I move in Z, it works. When I pitch up and move in Y, it works. But when I roll to one side and then pitch so I'm facing along X, it moves in Y instead. I tried a random guess fix and swapped the + and - signs in the xpos and ypos formulae, to get the bottom row of the matrix instead. That actually fixed the X problem, so now movement along each axis works correctly by itself. The big problem is that the movement still doesn't work in general. It seems to at first, but after a bit of flying around I start sliding sideways, backwards, down, some combination of them, etc. Is my math wrong, or is it something else in my code?

glRotate isn't going to be useful in this case. In fact, glRotate is rarely useful at all. With quaternions and OpenGL, you'll usually be constructing a matrix from the quaternion. In this case, since you are just doing a camera, it's even simpler because gluLookAt will multiply the appropriate matrix for you, without having to do it by hand.

All you need to do is write a method that rotates a vector by a quaternion. Use that to rotate {0, 0, 1} and {0, 1, 0} to get your appropriate forward and up vectors, and just plug those directly into gluLookAt(position, position + forward, up);
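Something along these lines, as a minimal sketch (the Vec3/Quat types, field names, and the rotate() helper here are placeholders, not your actual classes):

// Rotate a vector by a unit quaternion: the expanded form of q * v * conjugate(q).
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Vec3 rotate(const Quat& q, const Vec3& v)
{
	Vec3 r;
	r.x = v.x*(q.w*q.w + q.x*q.x - q.y*q.y - q.z*q.z) + v.y*2*(q.x*q.y - q.w*q.z) + v.z*2*(q.x*q.z + q.w*q.y);
	r.y = v.x*2*(q.x*q.y + q.w*q.z) + v.y*(q.w*q.w - q.x*q.x + q.y*q.y - q.z*q.z) + v.z*2*(q.y*q.z - q.w*q.x);
	r.z = v.x*2*(q.x*q.z - q.w*q.y) + v.y*2*(q.y*q.z + q.w*q.x) + v.z*(q.w*q.w - q.x*q.x - q.y*q.y + q.z*q.z);
	return r;
}

// Each frame, with 'rotation' as the orientation quaternion and 'pos' as the world position:
Vec3 forward = rotate(rotation, Vec3{0, 0, 1});
Vec3 up      = rotate(rotation, Vec3{0, 1, 0});
gluLookAt(pos.x, pos.y, pos.z,
          pos.x + forward.x, pos.y + forward.y, pos.z + forward.z,
          up.x, up.y, up.z);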

Ok, that took a few times reading through, but I think I get it. Instead of glRotate, I use the entire quaternion->rotation matrix and do the multiplication in my own method. Then I use it on those two vectors, one for my forward flight direction and the other for my ship's roll / up direction, and use those for the camera instead of translating.

Will that fix my flight problem? I'm assuming I then take my transformed forward vector and add some speed multiple of it to my position vector.
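Something like this is what I'm picturing ('speed' here is just a made-up scalar):

// move along the rotated forward vector each frame
position.x += forward.x * speed;
position.y += forward.y * speed;
position.z += forward.z * speed;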

Quote:
Original post by Tesserex
Ok, that took a few times reading through, but I think I get it. Instead of glRotate, I use the entire quaternion->rotation matrix and do the multiplication in my own method. Then I use it on those two vectors, one for my forward flight direction and the other for my ship's roll / up direction, and use those for the camera instead of translating.

Will that fix my flight problem? I'm assuming I then take my transformed forward vector and add some speed multiple of it to my position vector.
Looking at the code you posted earlier:

glTranslatef(0.0f,0.0f,-6.0f);	// Move Into The Screen

// draw the ship here

glRotatef(114.6*acos(rotation.w),rotation.x,rotation.y,rotation.z);

glTranslatef(xpos,ypos,zpos);

I'm unclear as to whether this is intended to be a camera or object transform, but in either case the sequence of transforms appears to be incorrect.

Can you clarify the purpose of the transform? You mentioned that this was for a camera, but the comment 'draw the ship here' seems to indicate otherwise.

Anyway, as mentioned, when working with a rotation in quaternion form it's typical to convert it to a matrix before submitting it to OpenGL. Furthermore, the direction vectors can be extracted directly from this matrix; there's no need to perform additional vector rotations to derive these vectors.
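For example (just a sketch, not tied to your class layout; it assumes a quaternion type with w/x/y/z members and the usual column-major OpenGL matrix layout):

// Build a column-major 4x4 matrix from a unit quaternion.
void quatToMatrix(const Quat& q, GLfloat m[16])
{
	m[0] = 1 - 2*(q.y*q.y + q.z*q.z);  m[4] = 2*(q.x*q.y - q.w*q.z);      m[8]  = 2*(q.x*q.z + q.w*q.y);     m[12] = 0;
	m[1] = 2*(q.x*q.y + q.w*q.z);      m[5] = 1 - 2*(q.x*q.x + q.z*q.z);  m[9]  = 2*(q.y*q.z - q.w*q.x);     m[13] = 0;
	m[2] = 2*(q.x*q.z - q.w*q.y);      m[6] = 2*(q.y*q.z + q.w*q.x);      m[10] = 1 - 2*(q.x*q.x + q.y*q.y); m[14] = 0;
	m[3] = 0;                          m[7] = 0;                          m[11] = 0;                         m[15] = 1;
}

// The direction vectors are simply the columns of the matrix:
//   right   = (m[0], m[1], m[2])
//   up      = (m[4], m[5], m[6])
//   forward = (m[8], m[9], m[10])
// and the whole rotation can be submitted with glMultMatrixf(m) instead of glRotatef.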

As for 114.6, that is one 'magic number' :) I see what you're doing (converting to degrees and multiplying by two), but it would be far better to write a simple utility function to perform the conversion (even then using a named constant rather than 57.x), and then include the factor of 2 directly in the expression.
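Something like this, say (a sketch using your 'rotation' variable; the helper and constant names are made up):

#include <math.h>

const float RAD_TO_DEG = 180.0f / 3.14159265f;

inline float radToDeg(float radians) { return radians * RAD_TO_DEG; }

// angle = 2 * acos(w) for a unit quaternion; the factor of 2 stays visible
glRotatef(2.0f * radToDeg(acosf(rotation.w)), rotation.x, rotation.y, rotation.z);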

However, although it looks like this method should work, again, it would probably be better to use a matrix. Your method relies on the specifics of how a quaternion is used to represent a rotation; although it's important to understand these specifics, code that works with quaternions should treat them more like a 'black box'. In short, you should avoid mucking around with the quaternion elements directly in most cases.

If you can clarify what the purpose of the transform you posted is, we can probably point out more specifically where the code might be in error.

Ok, for the purpose: think Star Fox. Third person view. The ship is in the center of the screen, at its location. The camera is a fixed distance directly behind it. So the first set of commands moves the camera back from the ship and draws the ship. Then we spin the world around the ship to orient it, then move the ship to its location in space. Because the ship was already drawn, it's tied to the camera and they move together.

For now, I've removed that part and am going to get first-person flying working first. It has the same problems, though. I implemented some matrix stuff this time, and now my problems are reversed: the movement seems to work, but the rotation is busted. Here's my new code:


Vector Transform(Vector v)
{
	// Multiply v by the rotation matrix derived from this (unit) quaternion.
	Vector nv;
	nv.x = v.x*(w*w + x*x - y*y - z*z) + v.y*(2*x*y - 2*w*z) + v.z*(2*x*z + 2*w*y);
	nv.y = v.x*(2*x*y + 2*w*z) + v.y*(w*w - x*x + y*y - z*z) + v.z*(2*y*z - 2*w*x);
	nv.z = v.x*(2*x*z - 2*w*y) + v.y*(2*y*z + 2*w*x) + v.z*(w*w - x*x - y*y + z*z);
	return nv;
}


You can probably tell, this just transforms any vector by the rotation matrix derived from the quaternion. This function is a member of my quaternion class.

Here's the keypress stuff...

if (KeyDown(VK_LEFT))
{
	rotation.Multiply(rollmq);
	up = rotation.Transform(yvect);
}
if (KeyDown(VK_RIGHT))
{
	rotation.Multiply(rollq);
	up = rotation.Transform(yvect);
}
if (KeyDown(VK_UP))
{
	rotation.Multiply(pitchq);
	forward = rotation.Transform(zvect);
}
if (KeyDown(VK_DOWN))
{
	rotation.Multiply(pitchmq);
	forward = rotation.Transform(zvect);
}
if (KeyDown(VK_SPACE))
{
	position.x += forward.x;
	position.y += forward.y;
	position.z += forward.z;
}


"rollmq" and "pitchmq" are the tiny angle quaternions for the negative turns. I figured that the up and forward vectors need only be updated if the roll and pitch change, respectively. If I update both each time, it still doesn't work, but it's behaves differently, so this might be a clue.


gluLookAt(position.x, position.y, position.z,
          position.x + forward.x, position.y + forward.y, position.z + forward.z,
          up.x, up.y, up.z);



This is now my only camera modifying line. It seems ok, giving position, position+forward, and up.

This isn't a complete answer to your question (I didn't look at your code carefully enough to comment in detail), but here are a few notes:

1. The problems of a) orienting and moving the ship, b) constructing an object matrix for the ship and rendering it, and c) constructing a view matrix for the camera should all be considered separately. I mention this because the code you posted seems to include elements of the solutions to all three problems, but itself is not the correct solution for any of them. Thinking about and solving the problems separately should help clear up some of this confusion.

2. Remember that when transforms are applied via OpenGL function calls, the order in which the transforms are applied is the opposite of the order in which the corresponding OpenGL function calls appear in the code.

3. The 'model transform' for an object typically consists of the transform sequence scale->rotate->translate (any of these is of course optional, and scale is often simply identity).

4. The 'view transform' that corresponds to a 'model transform' is, generally speaking, the inverse of that transform. There are various ways the inverse can be computed. In your case it appears you're trying to do it manually by applying the inverses of the individual transforms in the opposite order. Leaving aside scale, this should translate to (translate^-1)->(rotation^-1), where translate^-1 is the original translation negated, and rotation^-1 is the original rotation inverted (transpose for a matrix, conjugate for a quaternion, negation of angle or axis for an axis-angle pair). (See the sketch after this list.)

5. Third-person cameras are a different problem. It looks like you're already taking this approach, but it would probably be best to get basic object motion and rendering and first-person camera mode working before trying to implement a proper 3rd-person camera.
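To make 3 and 4 concrete, here's a rough sketch in fixed-function terms (the variable names are made up; 'rotMatrix' would be the ship's rotation quaternion converted to a column-major matrix, and 'rotMatrixTransposed' its transpose):

// Model transform for the ship: scale -> rotate -> translate.
// Remember the calls appear in the reverse of the order in which they're applied.
glTranslatef(shipPos.x, shipPos.y, shipPos.z);
glMultMatrixf(rotMatrix);
// glScalef(...) would go here if needed

// Corresponding view transform: the inverse. The negated translation is applied
// first and the inverse rotation second, so in call order the rotation comes first.
glMultMatrixf(rotMatrixTransposed);   // transpose == inverse for a pure rotation
glTranslatef(-shipPos.x, -shipPos.y, -shipPos.z);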

I hope these notes will help you to identify some of the problems in your code. Feel free to post back if you have further questions.

Despite not having a clue what your post was saying, it allowed me somehow to fix the problem entirely.

Using the new vector approach with gluLookAt solved my translation problems but the gimbal lock came back. That was quite annoying. My fix?

Reverse the order in which I multiplied my quaternions to add rotations.


if (KeyDown(VK_LEFT))
{
	// rollmq * rotation instead of rotation * rollmq
	Quaternion temp = rollmq;
	temp.Multiply(rotation);
	rotation = temp;
	//rotation.Multiply(rollmq);
}
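For my own notes, if I understand it right (assuming Multiply(q) means "this = this * q" and that rotation maps ship-local axes to world axes, neither of which I've double-checked), the two orders mean:

// rotation = delta * rotation;   pre-multiply: the small turn is applied about the fixed world axes
// rotation = rotation * delta;   post-multiply: the small turn is applied about the ship's current local axes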




And if anyone would like to know, I intend this to eventually become a space fighter game where you aren't limited to fighting in one plane (the 2d space kind of plane, not the vehicle). Also, I plan to control it with wiimotes :-D

Unfortunately, you seem to be getting way ahead of yourself. You need a grasp of linear algebra (at least the parts that pertain to 3D graphics) and the OpenGL API. I'd suggest getting yourself a book; there are plenty of good ones out there on the subject. I personally liked "Mathematics for 3D Game Programming and Computer Graphics" and "3D Math Primer for Graphics and Game Development." Yes, it's true that you can learn plenty about all the fancy pants stuff out there just by using Google; however, it *appears* as though you lack a basic understanding of what's really going on when you make these calls. It's incredibly important that you understand that in order to use it properly.

That being said, here are the important parts of my camera class, which is far from perfect but may shed some light on things.


#import <OpenGL/gl.h>
#import <OpenGL/glu.h>

#import "OCCamera.h"

@implementation OCCamera

- (id)initWithLocation:(vector_t)loc width:(int)w height:(int)h
{
[super init];

position = loc;

screenWidth = w;
screenHeight = h;
screenRatio = (float)screenWidth / (float)screenHeight;
near = 1.0f;
far = 768.0;
fov = 45.0f;

rotation = quaternion_identity();
//Completely unnecessary, but a good reminder.
forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
up = quaternion_rotate_vector(rotation, vector3(0,1,0));
right = quaternion_rotate_vector(rotation, vector3(1,0,0));
yaw = pitch = 0.0f;

interpolationSpeed = 1.0f;

return self;
}

- (void)animate:(float)dt
{
if(allowInterpolation)
{
elapsedTime += dt * interpolationSpeed;
if(elapsedTime > 1.0f)
{
elapsedTime = 1.0f;
allowInterpolation = false;
}
position = vector_add(initPosition, vector_scale(vector_subtract(destPosition, initPosition), elapsedTime));
rotation = Quaternion_SLERP(initRotation, destRotation, elapsedTime);

forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
up = quaternion_rotate_vector(rotation, vector3(0,1,0));
right = quaternion_rotate_vector(rotation, vector3(1,0,0));

yaw = atan2(forward.x, forward.z);
pitch = acos(vector_dot_product(vector3(0, 1, 0), forward)) - OSML_PI / 2.0f;
}

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fov, screenRatio, near, far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

gluLookAt(position.x, position.y, position.z,
position.x + forward.x, position.y + forward.y, position.z + forward.z,
up.x, up.y, up.z);
}

- (void)rotateYaw:(double)delta
{
if(allowInterpolation)
return;

yaw += delta;
quaternion_t qPitch = quaternion_from_angle_around_axis(pitch, vector3(1,0,0));
quaternion_t qYaw = quaternion_from_angle_around_axis(yaw, vector3(0,1,0));

rotation = quaternion_product(qYaw, qPitch);

forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
up = quaternion_rotate_vector(rotation, vector3(0,1,0));
right = quaternion_rotate_vector(rotation, vector3(1,0,0));
}
- (void)rotatePitch:(double)delta
{
if(allowInterpolation)
return;

pitch += delta;
quaternion_t qPitch = quaternion_from_angle_around_axis(pitch, vector3(1,0,0));
quaternion_t qYaw = quaternion_from_angle_around_axis(yaw, vector3(0,1,0));

rotation = quaternion_product(qYaw, qPitch);

forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
up = quaternion_rotate_vector(rotation, vector3(0,1,0));
right = quaternion_rotate_vector(rotation, vector3(1,0,0));
}
- (void)setPitch:(double)p
{
if(allowInterpolation)
return;
pitch = p;
[self rotatePitch:0];
}
- (void)setYaw:(double)p
{
if(allowInterpolation)
return;
yaw = p;
[self rotateYaw:0];
}
- (vector_t)targetPoint:(vector_t)point distance:(float)f
{
return vector3(point.x - forward.x * f, point.y - forward.y * f, point.z - forward.z * f);
}
- (void)targetOnPoint:(vector_t)point distance:(float)f
{
if(allowInterpolation)
return;

position.y = point.y - forward.y * f;
position.z = point.z - forward.z * f;
position.x = point.x - forward.x * f;
}
- (void)orbitYaw:(double)amt aroundPoint:(vector_t)center
{
if(allowInterpolation)
return;

vector_t newPos;
quaternion_t newRot;
float radius = sqrtf(pow(position.x - center.x, 2) + pow(position.z - center.z, 2));

yawOrbit += amt;
newPos.x = center.x + cos(yawOrbit + OSML_HALF_PI) * radius;
newPos.y = position.y;
newPos.z = center.z - sin(yawOrbit + OSML_HALF_PI) * radius;

yaw += amt;
quaternion_t qPitch = quaternion_from_angle_around_axis(pitch, vector3(1,0,0));
quaternion_t qYaw = quaternion_from_angle_around_axis(yaw, vector3(0,1,0));
newRot = quaternion_product(qYaw, qPitch);

position = newPos;
[self rotateTo:newRot];
}
- (void)rotateTo:(quaternion_t)q
{
if(allowInterpolation)
return;

rotation = q;
forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
up = quaternion_rotate_vector(rotation, vector3(0,1,0));
right = quaternion_rotate_vector(rotation, vector3(1,0,0));
}
- (bool)interpolateTo:(vector_t)pos withRotation:(quaternion_t)rot withSpeed:(float)speed cancelPrevious:(bool)cancel
{
if(allowInterpolation && !cancel)
return false;

interpolationSpeed = speed;
allowInterpolation = true;

elapsedTime = 0.0f;
initPosition = position;
destPosition = pos;
initRotation = rotation;
destRotation = rot;
return true;
}
- (void)moveForward:(double)amt
{
if(allowInterpolation)
return;

position.x += forward.x * amt;
position.y += forward.y * amt;
position.z += forward.z * amt;
}
- (void)moveRight:(double)amt
{
if(allowInterpolation)
return;

position.x -= right.x * amt;
position.y -= right.y * amt;
position.z -= right.z * amt;
}
- (void)moveUp:(double)amt
{
if(allowInterpolation)
return;

position.x += up.x * amt;
position.y += up.y * amt;
position.z += up.z * amt;
}
- (void)moveTo:(vector_t)pos
{
if(allowInterpolation)
return;

position = pos;
}
@end

Well, first of all, it's fixed; thank you, everyone, for your insight.

Second, I won't take offense at your comments, but your assumptions were wrong, Longjumper. I do have a grasp of linear algebra. I've been through a college course on it. Just last semester, in fact, with a primer on it (especially how it pertains to transformations) in my previous calculus 3 class. The only thing that was new to me here was the quaternion itself. I'm also in a class called Numerical Methods right now. You can probably guess I'm a CS major.

Thanks again to everyone.

No offense was intended, of course. And as always, I will admit when I am wrong, and in this case I may be. However, I took a class in linear algebra and numerical methods some years back, and it is only recently that I have truly grasped it intuitively. Perhaps I was just projecting myself onto you, though. ;)

Ok, I have a different question, but it's still about the camera, so I decided not to make a new thread. It's about which of two methods to use for a third-person chase cam. It seems to me there are two equally valid possibilities:

A.
1) Translate a set Z distance from the origin
2) Draw the player object at the origin (after rotating properly)
3) gluLookAt the world based on player position

B.
1) gluLookAt
2) Draw the player at the correct position it stores

I know the second method requires calculating the camera's forward vector so that a consistent follow distance is maintained during the chase, and also finding the proper camera position behind the player. Does the first method require this as well, or are the gluLookAt parameters simpler? I had what seemed to be a working 3rd person camera, but then I guessed (and still am not sure) that when I pitched, the player object was technically moving through space while the camera spun.

You will want to take the forward and up vectors of the player, scale them appropriately, and use them to compute the position of the camera for gluLookAt. The center of the camera (the reference point) will be the player's position, and you will have to do some vector manipulation to determine the up vector for gluLookAt.
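Roughly like this (a sketch with made-up names; 'dist' and 'height' are whatever offsets look right for the ship, and reusing the player's up vector is a reasonable first approximation):

// Chase camera: sit 'dist' units behind the player along its forward vector
// and 'height' units above it along its up vector, then look at the player.
float ex = playerPos.x - playerForward.x * dist + playerUp.x * height;
float ey = playerPos.y - playerForward.y * dist + playerUp.y * height;
float ez = playerPos.z - playerForward.z * dist + playerUp.z * height;

gluLookAt(ex, ey, ez,
          playerPos.x, playerPos.y, playerPos.z,   // reference point: the player
          playerUp.x, playerUp.y, playerUp.z);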
