Super Simple Camera Question

Started by
4 comments, last by ill 11 years, 1 month ago

I'm almost embarrassed to ask this; it's a super easy question I can't seem to find an answer to.

I'm starting out with shaders, and have a few simple questions.

Is my view matrix just the inverse of the camera's world position & orientation?

Would it be the inverse of the camera's world transform multiplied by the projection matrix, or the projection matrix multiplied by the camera inverse?

Assuming the above matrix gets passed into the shader (let's call it mvp), do I multiply the vertex position by mvp, or mvp by the vertex position?

I know, kind of dumb questions, but help is super appreciated!


Think about this: in matrix multiplication the order matters, and you have to apply transforms in the reverse order of how you want them to happen.

For example: for a model, I'd want to rotate first (so it rotates around its center), then translate it around the world, then put everything relative to my camera, and finally project everything to my view frustum.

So you'd say that: final_position = projectionMat * viewMat * translationMat * rotationMat * vertex.

You can pre-multiply the translation and rotation matrices to get your model-to-world matrix (from model space to world space), or, as you said, pre-multiply all the matrices to get the model-view-projection matrix (from model space directly to clip space).
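To make the ordering concrete, here's a minimal plain-Python sketch (row-major 4x4 matrices with hand-rolled helpers rather than a real math library; the helper names are made up for illustration) showing that applying rotation and then translation step by step matches applying the premultiplied model matrix once:

```python
import math

def mat_mul(a, b):
    # Row-major 4x4 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    # Apply a 4x4 matrix to a 4-component column vector
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def translation(x, y, z):
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

vertex = [1, 0, 0, 1]

# Rotate first, then translate: the translation goes on the left.
model = mat_mul(translation(5, 0, 0), rotation_z(math.pi / 2))

# Applying the matrices one at a time (rotation, then translation)...
step_by_step = mat_vec(translation(5, 0, 0),
                       mat_vec(rotation_z(math.pi / 2), vertex))
# ...gives the same result as the premultiplied model matrix.
premultiplied = mat_vec(model, vertex)
```

The same idea extends to projection * view * model: premultiply them all into one MVP on the CPU and apply that single matrix to each vertex.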

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

I've figured out that the modelView matrix is the inverse of the camera transform matrix.

You don't even have to do a full inverse; you can do a fast affine inverse. For a rigid transform (pure rotation plus translation, no scaling), the inverse rotation is just the transpose of the 3x3 rotation block, and the inverse translation is -R^T times the original translation. All that matters is: if you use some math library that has a fast affine inverse rather than a regular inverse, use that. And if you're rolling your own, it's only a few lines once you know the formula.
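For what it's worth, here's a hedged plain-Python sketch of that fast rigid inverse (row-major 4x4, valid only when the matrix is rotation plus translation; the function names are invented for this example). Multiplying the result by the original matrix should give back the identity:

```python
import math

def mat_mul(a, b):
    # Row-major 4x4 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    # Valid only for rotation + translation (no scale/shear):
    # inverse rotation = R^T, inverse translation = -R^T * t
    r_t = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [m[0][3], m[1][3], m[2][3]]
    new_t = [-sum(r_t[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [r_t[0] + [new_t[0]],
            r_t[1] + [new_t[1]],
            r_t[2] + [new_t[2]],
            [0, 0, 0, 1]]

# A camera transform: 30-degree roll about Z, positioned at (2, 3, 4)
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
camera = [[c, -s, 0, 2],
          [s,  c, 0, 3],
          [0,  0, 1, 4],
          [0,  0, 0, 1]]

# rigid_inverse(camera) * camera should be (numerically) the identity
check = mat_mul(rigid_inverse(camera), camera)
```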

Also, the normal matrix is just the 3x3 portion of the modelView matrix (assuming no non-uniform scaling; otherwise you need the inverse transpose of that 3x3). Don't make the mistake I made and send a normal matrix that isn't transformed by the model's transform. I was sending modelViewProjection * object transform to the shader, but the 3x3 portion of the modelView matrix without the object transform. The correct thing to send was modelView * object transform, and then take the 3x3 portion out of that.
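As a sanity check on that point, here's a sketch under the assumption that view and model contain only rotation and translation (with non-uniform scale you'd need the inverse transpose instead); the helpers are hand-rolled for illustration. The 3x3 comes out of view * model, not out of view alone:

```python
import math

def mat_mul(a, b):
    # Row-major 4x4 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def upper_3x3(m):
    # Extract the rotation block of a 4x4 matrix
    return [row[:3] for row in m[:3]]

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

view = mat_mul(translation(0, 0, -10), rotation_z(math.pi / 4))
model = mat_mul(translation(3, 0, 0), rotation_z(math.pi / 3))

# Take the 3x3 out of view * model, NOT out of view alone.
normal_matrix = upper_3x3(mat_mul(view, model))

# Transforming a unit normal by it should preserve length (rotation only).
n = [1.0, 0.0, 0.0]
n_eye = [sum(normal_matrix[i][k] * n[k] for k in range(3)) for i in range(3)]
length = math.sqrt(sum(x * x for x in n_eye))
```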

Thanks for the help!

TheChubu, model-view-projection in reverse. That makes sense now :)

ill, I'll look into the affine inverse, thanks for the tip

THANKS AGAIN!

I've figured out that the modelView matrix is the inverse of the camera transform matrix.

Close, but I think you mean the view matrix is.

Is my view matrix just the inverse of the camera's world position & orientation?

Perhaps you should clarify. The view matrix is the matrix that reformulates a point in the standard orthonormal basis as a point in an orthonormal basis centered at and oriented to the camera.

Here's the idea:
--There are three matrices: model, view, and projection (M, V, and P).
--There are four spaces we're concerned with: object (the coordinates in which model geometry is defined), world (the coordinates in which all objects are defined), eye (the coordinates relative to the camera), and clip (the precursor to being on the screen)
--If x is a vector in object space, y=M*x is that vertex in world space
--If y is a vector in world space, z=V*y is that vertex in eye space
--If z is a vector in eye space, w=P*z is that vertex in clip space

Consequently, the vertex transform looks like:

w = P * V * M * x

Note that the "P * V * M" product can be premultiplied into one matrix.

In the OpenGL 2 fixed-function pipeline, the M and V matrices actually *are* one matrix (the modelview matrix), but the meaning is the same.
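Tying this back to the original question: the view matrix is the inverse of the camera's world transform, which you can verify by checking that it maps the camera's own position to the eye-space origin. A plain-Python sketch (row-major 4x4, rotation + translation only, helper names invented for this example):

```python
import math

def mat_vec(m, v):
    # Apply a 4x4 matrix to a 4-component column vector
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def rigid_inverse(m):
    # Inverse of a rotation + translation matrix (no scale):
    # transpose the 3x3 block, translation becomes -R^T * t
    r_t = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [m[0][3], m[1][3], m[2][3]]
    new_t = [-sum(r_t[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [r_t[0] + [new_t[0]],
            r_t[1] + [new_t[1]],
            r_t[2] + [new_t[2]],
            [0, 0, 0, 1]]

# Camera sitting at (4, 5, 6) with some orientation
c, s = math.cos(0.7), math.sin(0.7)
camera_world = [[c, -s, 0, 4],
                [s,  c, 0, 5],
                [0,  0, 1, 6],
                [0,  0, 0, 1]]

view = rigid_inverse(camera_world)

# The camera's own position lands at the origin in eye space.
eye_space_cam = mat_vec(view, [4, 5, 6, 1])
```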

And a Unix user said rm -rf *.* and all was null and void... | There's no place like 127.0.0.1 | The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.

I derive the camera transform by getting the camera position and then transforming that by the direction, which is often derived from a quaternion. Then taking the inverse of that seems to give me the correct modelView matrix for my shaders. Is that not right? It's been working for me all this time.

I myself don't really have experience with the model matrix/world matrix and the view matrix being separate since I mostly work in OpenGL and have no experience with DirectX other than looking at sample source and HLSL shaders for ideas. As far as I understand, the model matrix in OpenGL is the world matrix in DirectX?

In OpenGL, or any right-handed coordinate system, I get the correct transform with P * MV * vertex position. And I'm guessing MV is V * M, but since I never had to do the multiplication myself I can't be sure.

This topic is closed to new replies.
