Camera 2D - transformations (rotation, translation, zoom)

4 comments, last by Zakwayda 13 years, 8 months ago
Hello,

I'm implementing a simple 2D renderer and I wanted to add camera support that would allow me to rotate, translate, and zoom the displayed view.

Can anyone explain to me how I can implement a function that returns a 3x3 projection matrix, built from the camera position, zoom factor, and rotation angle?

This could be the prototype:
Matrix3x3 MakeProjection(Vector2 vCenter, Vector2 vecZoom, float angle)

Maybe someone has a link to an article or other resource covering this topic?


For zoom, I'd recommend modifying the parameters of the (presumably orthographic) projection transform rather than the view transform. Making the extent of the projection bounds smaller will zoom in; making them larger will zoom out.
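As a sketch of that idea (the names here are illustrative, not any particular API): dividing the orthographic extents by the zoom factor gives exactly the behaviour described above — larger zoom, smaller extents, magnified view.

```cpp
// Zooming by shrinking the orthographic extents: zoom = 2 shows half the
// width/height of the view, i.e. magnifies the scene by 2.
struct Ortho2D { float left, right, bottom, top; };

Ortho2D ZoomedExtents(float viewWidth, float viewHeight, float zoom)
{
    float halfW = 0.5f * viewWidth  / zoom;   // larger zoom -> smaller extent
    float halfH = 0.5f * viewHeight / zoom;
    return Ortho2D{ -halfW, halfW, -halfH, halfH };
}
```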

For the view transform, you simply need to build a rotate-translate matrix from the orientation and position, and then invert it.

If you need more specific advice, perhaps you could clarify what part you need help with.
What I wanted to do is get a matrix that could be passed directly to the IDirect3DDevice9::SetTransform() method.

I am not sure, but I thought something like this should work:

|2/w 0 0 0|
|0 2/h 0 0|
|0 0 1/(zf-zn) 0| * rotation * translation
|0 0 -zn/(zf-zn) 1|

Multiplying these matrices should produce a projection matrix that includes the camera rotation and translation, and I should pass this matrix to the rendering API.
I didn't check your projection matrix in detail, but it looks to me like you're not inverting the view matrix. I think you have the multiplication order wrong as well.

Also, if you're using the fixed-function pipeline, I think it's expected that the view and projection matrices will be set separately. (It may actually work to combine them, but I don't think that's how the API is intended to be used.)

The construction of the view matrix should look something like this (pseudocode):
matrix view = inverse(rotation * translation);
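In 2-d with 3x3 matrices, that inverse has a simple closed form because the rotate-translate transform is rigid: transpose the rotation part and rotate-negate the translation, no general matrix inversion required. A sketch (column-vector convention; the types and names are illustrative):

```cpp
#include <cmath>

// 3x3 matrix, column-vector convention: v' = M * v.
struct Mat3 { float m[3][3]; };

// View matrix = inverse of the camera's world transform (rotate by 'angle',
// then translate to 'x, y'). For a rigid transform, inverse(T * R) has
// rotation R^T and translation -R^T * t.
Mat3 MakeView2D(float x, float y, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    Mat3 v = {{
        {  c,  s, -( c * x + s * y) },   // first row of R^T, then -R^T * t
        { -s,  c, -(-s * x + c * y) },
        {  0,  0,  1 }
    }};
    return v;
}
```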
I had to rethink my camera problem, and it turns out I cannot do it the way I described above; it seems to be more complicated than that. I think I made a design mistake, so I would like to ask another question.

In my 2D renderer, all primitives are gathered and drawn in a single batch. I think I should transform each primitive's vertices by the camera transformation and then pass the transformed primitive to the renderer.

I would like to know what you think of this kind of solution. Maybe some of you have implemented a 2D camera in your projects in a different way?
Quote: Original post by haci2x
In my 2D renderer, all primitives are gathered and drawn in a single batch. I think I should transform each primitive's vertices by the camera transformation and then pass the transformed primitive to the renderer.

I would like to know what you think of this kind of solution. Maybe some of you have implemented a 2D camera in your projects in a different way?
First of all, the concepts behind a 2-d camera are really no different than those behind a 3-d camera. You might use a different projection type, and some other minor details might be different, but the basic approach is the same.

Typically, each object in a game has a world transform associated with it (which may be identity), and then you have a 'view' transform that represents the camera. Typically you would not transform geometry manually into camera space; that shouldn't be necessary under normal circumstances.

Basically, you manipulate your camera as you would any other object, compute its world transform, invert the world transform to yield a view transform, and then upload this to Direct3D, either directly as the 'view' matrix for the fixed-function pipeline, or as an individual matrix or part of a combined matrix for use in a shader. The object world transforms are uploaded to Direct3D in a similar fashion. This is the usual approach, and should be applicable in your case unless you're doing something unusual.
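A minimal sketch of that flow in 2-d (3x3 matrices, column-vector convention, all names illustrative); with D3D9 you would expand the result to a 4x4 and upload it via SetTransform(D3DTS_VIEW, ...):

```cpp
#include <cmath>

// Treat the camera like any other object: build its world transform,
// then invert it to get the view transform.
struct Mat3 { float m[3][3]; };

Mat3 Mul(const Mat3& a, const Mat3& b)
{
    Mat3 r = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat3 CameraWorld(float x, float y, float angle)   // translate * rotate
{
    float c = std::cos(angle), s = std::sin(angle);
    return Mat3{{{ c, -s, x }, { s, c, y }, { 0, 0, 1 }}};
}

Mat3 RigidInverse(const Mat3& w)                  // view = inverse(world)
{
    Mat3 r = {};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            r.m[i][j] = w.m[j][i];                // transpose rotation part
    r.m[0][2] = -(r.m[0][0] * w.m[0][2] + r.m[0][1] * w.m[1][2]);
    r.m[1][2] = -(r.m[1][0] * w.m[0][2] + r.m[1][1] * w.m[1][2]);
    r.m[2][2] = 1.0f;
    return r;
}
```

An easy sanity check is that the camera's world transform multiplied by the resulting view transform should come out as the identity.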

