• ### Similar Content

• By elect
Hi,
OK, so we are having problems with our current mirror reflection implementation.
At the moment we do it very simply: for the i-th frame, we calculate the reflection vectors given the viewPoint and some predefined points on the mirror surface (positions and normals).
Then, using a least-squares fit, we find the point that has the minimum distance from all these reflection vectors. This becomes our virtual viewPoint (with the right orientation).
After that, we render offscreen to a texture by setting the OpenGL camera at the virtual viewPoint.
And finally we use the rendered texture on the mirror surface.
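For concreteness, here is a minimal sketch of that least-squares step (Vec3 and the function are illustrative, not our actual code). The point p minimizing the summed squared distances to rays with origins a[i] (points on the mirror) and unit directions d[i] (the reflected rays) solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) a_i:

#include <vector>

struct Vec3 { double x, y, z; };

// Least-squares point closest to a set of rays; solves the 3x3 normal
// equations M p = b with M = sum_i (I - d_i d_i^T), b = sum_i (I - d_i d_i^T) a_i.
Vec3 ClosestPointToRays(const std::vector<Vec3> &a, const std::vector<Vec3> &d)
{
	double M[3][3] = {{0}}, b[3] = {0};
	for (size_t i = 0; i < a.size(); ++i)
	{
		const double dv[3] = { d[i].x, d[i].y, d[i].z };
		const double av[3] = { a[i].x, a[i].y, a[i].z };
		for (int r = 0; r < 3; ++r)
			for (int c = 0; c < 3; ++c)
			{
				const double m = (r == c ? 1.0 : 0.0) - dv[r] * dv[c];
				M[r][c] += m;
				b[r] += m * av[c];
			}
	}
	// Solve M p = b by Cramer's rule (M is 3x3 and well-conditioned
	// unless all rays are nearly parallel).
	auto det3 = [](const double m[3][3]) {
		return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
		     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
		     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
	};
	const double D = det3(M);
	double p[3], T[3][3];
	for (int c = 0; c < 3; ++c)
	{
		for (int r = 0; r < 3; ++r)
			for (int k = 0; k < 3; ++k)
				T[r][k] = (k == c) ? b[r] : M[r][k]; // column c replaced by b
		p[c] = det3(T) / D;
	}
	return Vec3{ p[0], p[1], p[2] };
}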
So far this has always been fine, but now we have stronger constraints on accuracy.
What are our best options given that:
- we have a dynamic scene; the mirror and parts of the scene can change continuously from frame to frame
- we have about 3k points (with normals) per mirror, calculated offline using a CAD program (such as Catia)
- all the mirrors are always perfectly spherical (with different radii vertically and horizontally) and they are always convex
- a scene can have up to 10 mirrors
- it should be fast enough for VR (HTC Vive) on the fastest GPUs (desktop only)

Looking around, some papers talk about deriving a caustic surface offline, but I don't know if this suits my case.
Another paper used acceleration structures to find the intersections between the reflection vectors and the scene, and then adjusted the corresponding texture coordinates. This looks the most accurate, but also very heavy computationally.

Other than that, I couldn't find anything up-to-date or exhaustive on the topic; can you help me?

• Hello all,
I am currently working on a game engine for use with my game development that I would like to be as flexible as possible. As such, the exact requirements for how things should work can't be nailed down to a specific implementation, and I am looking for, at least for now, a good default, average-case design.
Here is what I have implemented:
- Deferred rendering using OpenGL
- Arbitrary number of lights and shadow mapping
- Each rendered object, defined by a set of geometry, textures, animation data, and a model matrix, is rendered with its own draw call
- Skeletal animations implemented on the GPU
- Model matrix transformation implemented on the GPU
- Frustum and octree culling for optimization

Here are my questions and concerns:
- Doing the skeletal animation on the GPU currently requires doing the skinning for each object multiple times per frame: once for the initial geometry rendering, and once for the shadow-map rendering for each light for which it is not culled. This seems very inefficient. Is there a way to do skeletal animation on the GPU only once across these render calls?
- Without doing the model matrix transformation on the CPU, I fail to see how I can easily batch objects with the same textures and shaders in a single draw call without passing a lot of matrix data to the GPU (an array of model matrices, then an index for each vertex into that array for transformation purposes? See the sketch after this list.)
- If I do the matrix transformations on the CPU, it seems I can't really do the skinning on the GPU, as the pre-transformed vertices will wreak havoc with the calculations, so this seems not viable unless I am missing something.
- Overall, it seems like the simplest solution is to just do all of the vertex manipulation on the CPU and pass the pre-transformed data to the GPU, using vertex shaders that do basically nothing. This doesn't seem the most efficient use of the graphics hardware, but it could potentially reduce the number of draw calls needed.
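To illustrate the idea in the second question, here is a sketch of per-object model matrices fed to the vertex shader as an instanced mat4 attribute, so a single draw call covers many objects that share textures and shaders (identifiers and the attribute locations 4-7 are placeholders, and glm is assumed for the matrix type):

GLuint matrixVBO;
glGenBuffers(1, &matrixVBO);
glBindBuffer(GL_ARRAY_BUFFER, matrixVBO);
glBufferData(GL_ARRAY_BUFFER, instanceCount * sizeof(glm::mat4),
             modelMatrices.data(), GL_DYNAMIC_DRAW);

// A mat4 vertex attribute occupies four consecutive attribute slots.
for (int i = 0; i < 4; ++i)
{
	glEnableVertexAttribArray(4 + i);
	glVertexAttribPointer(4 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
	                      (void*)(sizeof(float) * 4 * i));
	glVertexAttribDivisor(4 + i, 1); // advance once per instance, not per vertex
}

// One draw call renders all instances; the vertex shader reads its model
// matrix from the attribute and can still skin the un-transformed vertices,
// since the model transform is applied after skinning.
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0, instanceCount);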

Really, I am looking for some advice on how to proceed and on how something like this is typically handled. Are the multiple draw calls and skinning calculations not a huge deal? I would LIKE to save as much of the CPU's time per frame as possible so it can be tasked with other things, keeping CPU resources open for the rest of the engine's work. However, that becomes a moot point if the GPU becomes the bottleneck.

• Hello!
I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a light-weight, cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.
Features:
- True cross-platform
  - Exact same client code for all supported platforms and rendering backends
    - No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
    - No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
  - Exact same HLSL shaders run on all platforms and all backends
- Modular design
  - Components are clearly separated logically and physically and can be used as needed
  - Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule.)
  - No 15,000 lines-of-code files
- Clear object-based interface
- No global states
- Key graphics features:
  - Automatic shader resource binding designed to leverage the next-generation rendering APIs
  - Multithreaded command buffer generation
    - 50,000 draw calls at 300 fps with the D3D12 backend
  - Descriptor, memory and resource state management
- Modern C++ features to make code fast and reliable

The following platforms and low-level APIs are currently supported:
- Windows Desktop: Direct3D11, Direct3D12, OpenGL
- Universal Windows: Direct3D11, Direct3D12
- Linux: OpenGL
- Android: OpenGLES
- MacOS: OpenGL
- iOS: OpenGLES

API Basics
Initialization
The engine can either perform initialization of the API itself or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:
#include "RenderDeviceFactoryD3D12.h" using namespace Diligent; // ...  GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr; // Load the dll and import GetEngineFactoryD3D12() function LoadGraphicsEngineD3D12(GetEngineFactoryD3D12); auto *pFactoryD3D11 = GetEngineFactoryD3D12(); EngineD3D12Attribs EngD3D12Attribs; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16; EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16; EngD3D12Attribs.NumCommandsToFlushCmdList = 64; RefCntAutoPtr<IRenderDevice> pRenderDevice; RefCntAutoPtr<IDeviceContext> pImmediateContext; SwapChainDesc SwapChainDesc; RefCntAutoPtr<ISwapChain> pSwapChain; pFactoryD3D11->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 ); pFactoryD3D11->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain ); Creating Resources
Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:
BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:
TextureDesc TexDesc;
TexDesc.Name = "My texture 2D";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

Initializing Pipeline State
Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).
To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
- SHADER_SOURCE_LANGUAGE_DEFAULT - the shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
- SHADER_SOURCE_LANGUAGE_HLSL - the shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
- SHADER_SOURCE_LANGUAGE_GLSL - the shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
- Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
- Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps, etc.
- Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine.
The following is an example of shader initialization:
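A sketch of what this might look like (the CreateShader() call and the Desc, EntryPoint and FilePath members are assumptions based on the structure described above; the values are illustrative):

ShaderCreationAttribs CreationAttribs;
CreationAttribs.Desc.Name = "MyVertexShader";                 // illustrative
CreationAttribs.Desc.ShaderType = SHADER_TYPE_VERTEX;
CreationAttribs.EntryPoint = "main";
CreationAttribs.FilePath = "MyShader.fx";                     // illustrative
CreationAttribs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL; // convert to GLSL on GL backends
RefCntAutoPtr<IShader> pVS;
m_pDevice->CreateShader(CreationAttribs, &pVS);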
To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and formats of render targets, and the depth-stencil format:
// This is a graphics pipeline
PSODesc.IsComputePipeline = false;
PSODesc.GraphicsPipeline.NumRenderTargets = 1;
PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:
// Init rasterizer state
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
//RasterizerDesc.MultisampleEnable = false; // do not allow MSAA (fonts would be degraded)
RasterizerDesc.AntialiasedLineEnable = False;

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:
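For example (a sketch; the exact signature is an assumption):

RefCntAutoPtr<IPipelineState> m_pPSO;
m_pDevice->CreatePipelineState(PSODesc, &m_pPSO);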
Shader resource binding in Diligent Engine is based on grouping variables into three categories (static, mutable and dynamic). Static variables are expected to be set only once and may not be changed once a resource is bound. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers. They are bound directly to the shader object:
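For example (a sketch; the variable and resource names are illustrative, and the GetShaderVariable() call on the shader object is an assumption):

// Bind a constant buffer directly to the vertex shader's static variable
pVS->GetShaderVariable("cbCameraAttribs")->Set(pCameraAttribsCB);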

Mutable and dynamic variables are bound through a shader resource binding object (SRB), which is created by the pipeline state:

m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Dynamic and mutable resources are then set through the SRB object:
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this affects performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive. This post explains shader resource binding in more detail.
Setting the Pipeline State and Invoking Draw Command
Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context:
// Clear render target
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);

m_pContext->SetPipelineState(m_pPSO);

Also, all shader resources must be committed to the device context:
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that a draw command requires a graphics pipeline to be bound, and a dispatch command requires a compute pipeline. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);

Tutorials and Samples
The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.

The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface.

The Atmospheric scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.

The repository includes the Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity
Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.

• By Yxjmir
I'm trying to load data from a .gltf file into a struct, which I then use to load a .bin file. I don't think the problem is with how the vertex positions are loaded, but rather with the indices. This is what I get when drawing with glDrawArrays(GL_LINES, ...):

Also, using glDrawElements gives a similar result. Since it looks like it's drawing triangles using the wrong vertices for each face, I'm assuming it needs an index buffer/element buffer. (I'm not sure why there is a line going through part of it; it doesn't look like it belongs to a side. I re-exported it without texture coordinates checked, and it's not there.)
I'm using jsoncpp to load the glTF file; its format is based on JSON. Here is the gltf struct I'm using, and how I parse the file:
glBindVertexArray(g_pGame->m_VAO);
glDrawElements(GL_LINES, g_pGame->m_indices.size(), GL_UNSIGNED_BYTE, (void*)0); // Only shows with GL_UNSIGNED_BYTE
glDrawArrays(GL_LINES, 0, g_pGame->m_vertexCount);
So, I'm asking: what type should I use for the indices? It doesn't seem to be unsigned short, which is what I selected with the Khronos Group exporter for Blender. Also, am I reading part or all of the .bin file wrong?
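For reference, the glTF 2.0 spec stores the index type in the accessor's componentType field, and its numeric values match the GL enums directly, so presumably the type for glDrawElements can be read straight from the file (a sketch; indexAccessorIdx, indexCount and byteOffset are placeholders for values parsed from the file):

int componentType = root["accessors"][indexAccessorIdx]["componentType"].asInt();
GLenum indexType = 0;
switch (componentType)
{
	case 5121: indexType = GL_UNSIGNED_BYTE;  break; // 5121 == GL_UNSIGNED_BYTE
	case 5123: indexType = GL_UNSIGNED_SHORT; break; // 5123 == GL_UNSIGNED_SHORT
	case 5125: indexType = GL_UNSIGNED_INT;   break; // 5125 == GL_UNSIGNED_INT
}
glDrawElements(GL_TRIANGLES, indexCount, indexType, (void*)byteOffset);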
Test.gltf
Test.bin

How do I use the base DirectX or OpenGL APIs to make a physics-based destruction simulation?
Will it just be smart rendering, or is something else required?

# OpenGL Quaternion camera, _translation_ problem


## Recommended Posts

I learned all about quaternions yesterday and successfully implemented them in OpenGL for my camera. I can spin about just wonderfully now, with no gimbal lock. I'm making a ship-flying type game, and so obviously I want to be able to move the ship forward, in the local z direction. My code keeps track of the rotation quaternion; for rotations I just have a small pitch or roll angle quaternion to multiply with to get the new rotation. To keep track of my world position, I have xpos, ypos, zpos. My problem seems to be getting the correct view vector out of the quaternion. My initial vector, for movement, is (0, 0, 1) because I want to move in Z. To rotate it with the quaternion, I looked up how to turn the quat into a rotation matrix. Multiplying that matrix by (0,0,1) gives its third column. This (I think) should be the new view vector, which I should add to my world position. Here's my code:
if (KeyDown(VK_SPACE))
{
xpos += 2*rotation.x*rotation.z - 2*rotation.w*rotation.y;
ypos += 2*rotation.y*rotation.z + 2*rotation.w*rotation.x;
zpos += rotation.w*rotation.w - rotation.x*rotation.x - rotation.y*rotation.y + rotation.z*rotation.z;
}


...
glTranslatef(0.0f,0.0f,-6.0f);	// Move Into The Screen

// draw the ship here

glRotatef(114.6*acos(rotation.w),rotation.x,rotation.y,rotation.z);

glTranslatef(xpos,ypos,zpos);


From the start, when I move into Z, it works. When I pitch myself up and move into Y, it works. When I roll to one side and then pitch myself into the X, it moves in Y instead. I tried a random guess fix and swapped the + and - signs in the xpos and ypos formulas, to get the bottom row of the matrix instead. This actually fixed the x problem, so movement on each of the three axes by itself now works correctly. The big problem is that this movement still doesn't work overall. It seems to at first, but after a bit of flying around, I start sliding sideways, backwards, down, or some combination of them, etc. Is my math wrong, or is it something else in my code?

##### Share on other sites
glRotate isn't going to be useful in this case. In fact, glRotate is rarely useful at all. With quaternions and OpenGL, you'll usually be constructing a matrix from the quaternion. In this case, since you are just doing a camera, it's even simpler because gluLookAt will multiply the appropriate matrix for you, without having to do it by hand.

All you need to do is write a method that rotates a vector by a quaternion. Use that to rotate {0, 0, 1} and {0, 1, 0} to get your appropriate forward and up vectors, and just plug those directly into gluLookAt(position, position + forward, up);
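For example, a small sketch of such a method (assuming a simple Vector/Quaternion layout like the one used in this thread), based on the identity v' = v + 2w(q_v × v) + 2 q_v × (q_v × v) for a unit quaternion:

Vector Cross(const Vector &a, const Vector &b)
{
	Vector r;
	r.x = a.y*b.z - a.z*b.y;
	r.y = a.z*b.x - a.x*b.z;
	r.z = a.x*b.y - a.y*b.x;
	return r;
}

// Rotate v by the unit quaternion q without building the full matrix.
Vector Rotate(const Quaternion &q, const Vector &v)
{
	Vector qv = { q.x, q.y, q.z };
	Vector t = Cross(qv, v);
	t.x *= 2; t.y *= 2; t.z *= 2;  // t = 2 (q_v x v)
	Vector u = Cross(qv, t);       // u = q_v x t
	Vector r;
	r.x = v.x + q.w*t.x + u.x;
	r.y = v.y + q.w*t.y + u.y;
	r.z = v.z + q.w*t.z + u.z;
	return r;
}

Rotating {0, 0, 1} gives the forward vector and {0, 1, 0} the up vector, exactly as described above.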

##### Share on other sites
Ok, that took a few times reading through, but I think I get it. Instead of glRotate, use the full quaternion-to-rotation-matrix conversion and multiply in my own method. Then use it on those two vectors, one for my forward flight direction and the other for my ship's roll / up direction. Use that for the camera instead of translating.

Will that fix my flight problem? I'm assuming I then take my transformed forward vector and add some speed multiple of it to my position vector.

##### Share on other sites
Quote:
Original post by Tesserex
Ok, that took a few times reading through, but I think I get it. Instead of glRotate, use the full quaternion-to-rotation-matrix conversion and multiply in my own method. Then use it on those two vectors, one for my forward flight direction and the other for my ship's roll / up direction. Use that for the camera instead of translating.

Will that fix my flight problem? I'm assuming I then take my transformed forward vector and add some speed multiple of it to my position vector.
Looking at the code you posted earlier:

glTranslatef(0.0f, 0.0f, -6.0f);	// Move Into The Screen

// draw the ship here

glRotatef(114.6*acos(rotation.w), rotation.x, rotation.y, rotation.z);

glTranslatef(xpos, ypos, zpos);

I'm unclear as to whether this is intended to be a camera or object transform, but in either case the sequence of transforms appears to be incorrect.

Can you clarify the purpose of the transform? You mentioned that this was for a camera, but the comment 'draw the ship here' seems to indicate otherwise.

Anyway, as mentioned, when working with a rotation in quaternion form it's typical to convert it to a matrix before submitting it to OpenGL. Furthermore, the direction vectors can be extracted directly from this matrix; there's no need to perform additional vector rotations to derive these vectors.
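For instance, a sketch of such a conversion (assuming a quaternion struct with x, y, z, w, normalized):

// Build a column-major 4x4 rotation matrix from a unit quaternion and
// multiply it onto the current OpenGL matrix (in place of glRotatef).
void MultQuaternionMatrix(const Quaternion &q)
{
	const float x = q.x, y = q.y, z = q.z, w = q.w;
	const float m[16] = {
		1-2*(y*y+z*z), 2*(x*y+w*z),   2*(x*z-w*y),   0, // first column: rotated x axis (right)
		2*(x*y-w*z),   1-2*(x*x+z*z), 2*(y*z+w*x),   0, // second column: rotated y axis (up)
		2*(x*z+w*y),   2*(y*z-w*x),   1-2*(x*x+y*y), 0, // third column: rotated z axis (forward)
		0,             0,             0,             1
	};
	glMultMatrixf(m); // OpenGL matrices are column-major
}

Note that the columns are the rotated basis vectors, which is why the direction vectors can be read directly out of the matrix.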

As for 114.6, that is one 'magic number' :) I see what you're doing (converting to degrees and multiplying by two), but it would be far better to write a simple utility function to perform the conversion (even then using a named constant rather than 57.x), and then include the factor of 2 directly in the expression.

However, although it looks like this method should work, again, it would probably be better to use a matrix. Your method relies on the specifics of how a quaternion is used to represent a rotation; although it's important to understand these specifics, code that works with quaternions should treat them more like a 'black box'. In short, you should avoid mucking around with the quaternion elements directly in most cases.

If you can clarify what the purpose of the transform you posted is, we can probably point out more specifically where the code might be in error.

##### Share on other sites
Ok, for the purpose. Think Star Fox. Third-person view. The ship is in the center of the screen, at its location. The camera is a fixed distance directly behind it. So the first set of commands moves the camera back from the ship and draws the ship. Then we spin the world around the ship to orient it, then move the ship to its location in space. Because the ship was already drawn, it's tied to the camera and they move together.

For now, though, I've removed that part and am going to get first person flying working first. It has the same problems though. I implemented some matrix stuff this time, and now my problems are backwards. The moving seems to work, but the rotating around is busted. Here's my new code:

Vector Transform(Vector v)
{
	Vector nv;
	nv.x = v.x*(w*w + x*x - y*y - z*z) + v.y*(2*x*y - 2*w*z) + v.z*(2*x*z + 2*w*y);
	nv.y = v.x*(2*x*y + 2*w*z) + v.y*(w*w - x*x + y*y - z*z) + v.z*(2*y*z - 2*w*x);
	nv.z = v.x*(2*x*z - 2*w*y) + v.y*(2*y*z + 2*w*x) + v.z*(w*w - x*x - y*y + z*z);
	return nv;
}

You can probably tell, this just transforms any vector by the rotation matrix derived from the quaternion. This function is a member of my quaternion class.

Here's the keypress stuff...
if (KeyDown(VK_LEFT))
{
	rotation.Multiply(rollmq);
	up = rotation.Transform(yvect);
}
if (KeyDown(VK_RIGHT))
{
	rotation.Multiply(rollq);
	up = rotation.Transform(yvect);
}
if (KeyDown(VK_UP))
{
	rotation.Multiply(pitchq);
	forward = rotation.Transform(zvect);
}
if (KeyDown(VK_DOWN))
{
	rotation.Multiply(pitchmq);
	forward = rotation.Transform(zvect);
}
if (KeyDown(VK_SPACE))
{
	position.x += forward.x;
	position.y += forward.y;
	position.z += forward.z;
}

"rollmq" and "pitchmq" are the tiny angle quaternions for the negative turns. I figured that the up and forward vectors need only be updated if the roll and pitch change, respectively. If I update both each time, it still doesn't work, but it's behaves differently, so this might be a clue.

gluLookAt(position.x, position.y, position.z,
          position.x + forward.x, position.y + forward.y, position.z + forward.z,
          up.x, up.y, up.z);

This is now my only camera modifying line. It seems ok, giving position, position+forward, and up.

##### Share on other sites
This isn't a complete answer to your question (I didn't look at your code carefully enough to comment in detail), but here are a few notes:

1. The problems of a) orienting and moving the ship, b) constructing an object matrix for the ship and rendering it, and c) constructing a view matrix for the camera should all be considered separately. I mention this because the code you posted seems to include elements of the solutions to all three problems, but itself is not the correct solution for any of them. Thinking about and solving the problems separately should help clear up some of this confusion.

2. Remember that when transforms are applied via OpenGL function calls, the order in which the transforms are applied is the opposite of the order in which the corresponding OpenGL function calls appear in the code.

3. The 'model transform' for an object typically consists of the transform sequence scale->rotate->translate (any of these is of course optional, and scale is often simply identity).

4. The 'view transform' that corresponds to a 'model transform' is, generally speaking, the inverse of that transform. There are various ways the inverse can be computed. In your case it appears you're trying to do it manually by applying the inverses of the individual transforms in the opposite order. Leaving aside scale, this should translate to (translate^-1)->(rotation^-1), where translate^-1 is the original translation negated, and rotation^-1 is the original rotation inverted (transpose for a matrix, conjugate for a quaternion, negation of angle or axis for an axis-angle pair). (See the sketch after these notes.)

5. Third-person cameras are a different problem. It looks like you're already taking this approach, but it would probably be best to get basic object motion and rendering and first-person camera mode working before trying to implement a proper 3rd-person camera.
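Regarding note 4, a sketch of the view transform in immediate-mode OpenGL, reusing a quaternion-to-matrix helper like the one sketched earlier (Conjugate() is assumed):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
MultQuaternionMatrix(Conjugate(rotation)); // rotation^-1: conjugate of a unit quaternion
glTranslatef(-xpos, -ypos, -zpos);         // translate^-1: negated position

Since OpenGL applies the transform closest to the geometry first, a vertex is translated by the negated position and then rotated by the conjugate, which is exactly (translate^-1)->(rotation^-1).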

I hope these notes will help you to identify some of the problems in your code. Feel free to post back if you have further questions.

##### Share on other sites
Despite not having a clue what your post was saying, it allowed me somehow to fix the problem entirely.

Using the new vector approach with gluLookAt solved my translation problems but the gimbal lock came back. That was quite annoying. My fix?

Reverse the order in which I multiplied my quaternions to add rotations.

if (KeyDown(VK_LEFT))
{
	Quaternion temp = rollmq;
	temp.Multiply(rotation);
	rotation = temp;
	//rotation.Multiply(rollmq);
}

And if anyone would like to know, I intend this to eventually become a space fighter game where you aren't limited to fighting in one plane (the 2d space kind of plane, not the vehicle). Also, I plan to control it with wiimotes :-D

##### Share on other sites
Unfortunately, you seem to be getting way ahead of yourself. You need to have a grasp of linear algebra (at least the parts that pertain to 3D graphics) and the OpenGL API. I'd suggest getting yourself a book; there are plenty of good ones out there on the subject. I personally liked "Mathematics for 3D Game Programming and Computer Graphics" and "3D Math Primer for Graphics and Game Development." Yes, it is true that you can learn plenty about all the fancy-pants stuff out there just by using Google; however, it *appears* as though you lack a basic understanding of what's really going on when you make these calls. It is incredibly important that you do understand it in order to use it properly.

That being said, here are the important parts of my camera class, which is far from perfect but may shine some light on it.

#import <OpenGL/gl.h>
#import <OpenGL/glu.h>
#import "OCCamera.h"

@implementation OCCamera

- (id)initWithLocation:(vector_t)loc width:(int)w height:(int)h
{
	[super init];

	position = loc;
	screenWidth = w;
	screenHeight = h;
	screenRatio = (float)screenWidth / (float)screenHeight;
	near = 1.0f;
	far = 768.0;
	fov = 45.0f;

	rotation = quaternion_identity();

	// Completely unnecessary, but a good reminder.
	forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
	up = quaternion_rotate_vector(rotation, vector3(0,1,0));
	right = quaternion_rotate_vector(rotation, vector3(1,0,0));
	yaw = pitch = 0.0f;

	interpolationSpeed = 1.0f;

	return self;
}

- (void)animate:(float)dt
{
	if(allowInterpolation)
	{
		elapsedTime += dt * interpolationSpeed;
		if(elapsedTime > 1.0f)
		{
			elapsedTime = 1.0f;
			allowInterpolation = false;
		}
		position = vector_add(initPosition, vector_scale(vector_subtract(destPosition, initPosition), elapsedTime));
		rotation = Quaternion_SLERP(initRotation, destRotation, elapsedTime);

		forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
		up = quaternion_rotate_vector(rotation, vector3(0,1,0));
		right = quaternion_rotate_vector(rotation, vector3(1,0,0));
		yaw = atan2(forward.x, forward.z);
		pitch = acos(vector_dot_product(vector3(0, 1, 0), forward)) - OSML_PI / 2.0f;
	}

	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	gluPerspective(fov, screenRatio, near, far);

	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
	gluLookAt(position.x, position.y, position.z,
			  position.x + forward.x, position.y + forward.y, position.z + forward.z,
			  up.x, up.y, up.z);
}

- (void)rotateYaw:(double)delta
{
	if(allowInterpolation)
		return;

	yaw += delta;
	quaternion_t qPitch = quaternion_from_angle_around_axis(pitch, vector3(1,0,0));
	quaternion_t qYaw = quaternion_from_angle_around_axis(yaw, vector3(0,1,0));

	rotation = quaternion_product(qYaw, qPitch);

	forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
	up = quaternion_rotate_vector(rotation, vector3(0,1,0));
	right = quaternion_rotate_vector(rotation, vector3(1,0,0));
}

- (void)rotatePitch:(double)delta
{
	if(allowInterpolation)
		return;

	pitch += delta;
	quaternion_t qPitch = quaternion_from_angle_around_axis(pitch, vector3(1,0,0));
	quaternion_t qYaw = quaternion_from_angle_around_axis(yaw, vector3(0,1,0));

	rotation = quaternion_product(qYaw, qPitch);

	forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
	up = quaternion_rotate_vector(rotation, vector3(0,1,0));
	right = quaternion_rotate_vector(rotation, vector3(1,0,0));
}

- (void)setPitch:(double)p
{
	if(allowInterpolation)
		return;
	pitch = p;
	[self rotatePitch:0];
}

- (void)setYaw:(double)p
{
	if(allowInterpolation)
		return;
	yaw = p;
	[self rotateYaw:0];
}

- (vector_t)targetPoint:(vector_t)point distance:(float)f
{
	return vector3(point.x - forward.x * f, point.y - forward.y * f, point.z - forward.z * f);
}

- (void)targetOnPoint:(vector_t)point distance:(float)f
{
	if(allowInterpolation)
		return;

	position.y = point.y - forward.y * f;
	position.z = point.z - forward.z * f;
	position.x = point.x - forward.x * f;
}

- (void)orbitYaw:(double)amt aroundPoint:(vector_t)center
{
	if(allowInterpolation)
		return;

	vector_t newPos;
	quaternion_t newRot;
	float radius = sqrtf(pow(position.x - center.x, 2) + pow(position.z - center.z, 2));
	yawOrbit += amt;
	newPos.x = center.x + cos(yawOrbit + OSML_HALF_PI) * radius;
	newPos.y = position.y;
	newPos.z = center.z - sin(yawOrbit + OSML_HALF_PI) * radius;
	yaw += amt;
	quaternion_t qPitch = quaternion_from_angle_around_axis(pitch, vector3(1,0,0));
	quaternion_t qYaw = quaternion_from_angle_around_axis(yaw, vector3(0,1,0));

	newRot = quaternion_product(qYaw, qPitch);
	position = newPos;
	[self rotateTo:newRot];
}

- (void)rotateTo:(quaternion_t)q
{
	if(allowInterpolation)
		return;

	rotation = q;
	forward = quaternion_rotate_vector(rotation, vector3(0,0,1));
	up = quaternion_rotate_vector(rotation, vector3(0,1,0));
	right = quaternion_rotate_vector(rotation, vector3(1,0,0));
}

- (bool)interpolateTo:(vector_t)pos withRotation:(quaternion_t)rot withSpeed:(float)speed cancelPrevious:(bool)cancel
{
	if(allowInterpolation && !cancel)
		return false;

	interpolationSpeed = speed;
	allowInterpolation = true;

	elapsedTime = 0.0f;
	initPosition = position;
	destPosition = pos;
	initRotation = rotation;
	destRotation = rot;
	return true;
}

- (void)moveForward:(double)amt
{
	if(allowInterpolation)
		return;

	position.x += forward.x * amt;
	position.y += forward.y * amt;
	position.z += forward.z * amt;
}

- (void)moveRight:(double)amt
{
	if(allowInterpolation)
		return;

	position.x -= right.x * amt;
	position.y -= right.y * amt;
	position.z -= right.z * amt;
}

- (void)moveUp:(double)amt
{
	if(allowInterpolation)
		return;

	position.x += up.x * amt;
	position.y += up.y * amt;
	position.z += up.z * amt;
}

- (void)moveTo:(vector_t)pos
{
	if(allowInterpolation)
		return;

	position = pos;
}

@end

##### Share on other sites
Well, first of all, it's fixed, thank you everyone for your insight.

Second, I'll not take offense to your comments, but your assumptions were wrong, Longjumper. I do have a grasp of linear algebra. I've been through a college course on it. Just last semester, in fact, with a primer on it (especially how it pertains to transformations) in my previous calculus 3 class. The only thing that was new to me here was the quaternion itself. I'm also in a class called Numerical Methods right now. You can probably guess I'm a CS major.

Thanks again to everyone.