# rendering a scene with magnification


## Recommended Posts

rendering a scene with magnification

how is it done?

a few methods come to mind....

1. move everything m times closer to the camera, then render as usual. seems pretty easy: given a position vector p0 = (x,y,z) from camera to object and a magnification factor of m, the new position is just p1 = (x/m, y/m, z/m) - right?

2. move the far plane m times farther out, and render everything at m times normal size. the z-buffer would cover a larger range at the same precision, possibly leading to z-fighting.

3. render to an offscreen buffer, then zoom in by a factor of m. doesn't get stuff beyond the far plane that would be visible under magnification, and doesn't get stuff within the viewing frustum that would be visible under magnification but not without it.
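method 1 is just a uniform scale of camera-relative positions. A minimal sketch (the `Vec3` type and function name are mine, not from any particular engine):

```cpp
#include <cassert>

// tiny vector type for illustration only
struct Vec3 { float x, y, z; };

// method 1: given an object's position p relative to the camera and a
// magnification factor m, move it m times closer by uniform scaling
Vec3 magnifyPosition(Vec3 p, float m)
{
    return { p.x / m, p.y / m, p.z / m };
}
```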

what about the field of view? does that need to be adjusted? could FOV alone be used? i know narrow FOVs give a scope-like effect....

##### Share on other sites

FOV is the primary option (this is how MDK or Quake 3 zoom guns worked).

The more you magnify by shrinking the FOV, the less perspective distortion appears, and it might become harder to track something in motion.

Moving closer to the spot is the second option, but the user might notice the change in camera position.

Doing both at the same time, in opposite directions, gives the nice 'wow' effect often seen in films (the dolly zoom: the subject stays the same size while the background appears to grow or shrink).
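That "both at once" effect works by holding d · tan(fov/2) constant while the camera distance d changes. A sketch of the FOV update (function and parameter names are assumptions of mine):

```cpp
#include <cmath>

// dolly zoom: given the old FOV (radians) and camera distance, return the
// FOV that keeps the subject's on-screen size constant at a new distance,
// i.e. it holds newDist * tan(newFov / 2) == oldDist * tan(oldFov / 2)
float dollyZoomFov(float oldFov, float oldDist, float newDist)
{
    return 2.f * std::atan(oldDist * std::tan(oldFov / 2.f) / newDist);
}
```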

##### Share on other sites

i'm wondering what the difference between method #1 and reduced FOV would be.    i've experimented a lot with FOVs, trying to reproduce human vision better.  so i'm quite familiar with both the scope effect (low FOV) and turning the world inside out (FOV > 180).

is there a mathematical relation between FOV and apparent magnification?    i'd have to check the FOV formulas, don't recall offhand.    suspect there is....     might be an easy way out.

method 1 might give a fisheye type effect, since it doesn't narrow the FOV.

##### Share on other sites

From your initial post there seems to be no big difference between methods 1 & 2. Method 1 does a uniform scale around the camera; for method 2 you did not specify the origin, but it's a uniform scale too, so any difference will not affect how the scene looks.

i've experimented a lot with FOVs, trying to reproduce human vision better

This sounds like what you really want is something like: objects at the center of the view should appear large, but you also want a wide angle of vision?

For an example see the image here: http://www.gamedev.net/topic/667455-how-are-sphere-maps-created/

The math here is simple, but the problem is that this leads to a nonlinear projection: Straight lines in worldspace become curves on screen, so traditional triangle rasterization can't do this (raytracing can easily).

You could render with an FOV close to 180 and do postprocessing to magnify the center of the image, but you would need very high resolution - unfortunately you have the least detail where you need it most, and vice versa.

The solution might be to render 4 images to compensate for that (something like looking at the corner of a cubemap from its center), and combine them into a final image.

I think this will become standard in the far future (at least for wide-FOV VR headsets), but i'm not sure whether people would accept the unavoidable fisheye look on a flat screen.

##### Share on other sites

From your initial post there seems to be no big difference between methods 1 & 2.

yes - zbuf resolution seems to be about it.

For an example see the image here

yeah - that's what i meant by fisheye. don't want that.

it will be used to implement:

"Put in on the screen Mr Sulu, magnification 50."    - i'm working on SIMSpace  (formerly SIMTrek)

guess FOV will be the easy way to go. well, at least you don't get weirdness at low FOV like you do at high (i.e. edge curling), so it should be ok.

now i need to look up or determine the relation between FOV and apparent scale.
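The relation, if it helps: apparent on-screen scale is proportional to 1/tan(fov/2), so a magnification factor m corresponds to fov' = 2·atan(tan(fov/2)/m). A sketch (the function name is mine):

```cpp
#include <cmath>

// FOV (radians) that magnifies the view m times relative to a base FOV:
// on-screen scale goes as 1 / tan(fov / 2), so divide the half-angle
// tangent by the magnification factor
float zoomedFov(float baseFov, float m)
{
    return 2.f * std::atan(std::tan(baseFov / 2.f) / m);
}
```

For example, "magnification 50" on a 90-degree base FOV gives 2·atan(tan(45°)/50) ≈ 2.3 degrees.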

##### Share on other sites

"Put in on the screen Mr Sulu, magnification 50." - i'm working on SIMSpace (formerly SIMTrek)

Ha, ok :)

E.g. you have a 3D scene on a screen sized 1000 × 500, and you want to take the rectangle from pixel (4,5) to (90,55) and upscale it to fill the screen.

So it's about an area in the upper left of the viewport.

By changing FOV you can only zoom in on the center region of the screen.

To get the offset to the left and upwards you need to shift the projection center as well (the point on screen the projection is centered on - "principal point" is probably the proper term).

Usually this point is exactly at the screen center, and mostly that's not mentioned anywhere. But in your case you may need to move this point even outside the screen.

I figured the math out years back and might still have the code, but first i'll try to find something on the net...

##### Share on other sites

... no luck - my own code does not produce a final 4x4 matrix (it's used in a software renderer), and i have not found any useful resources on the topic.

I think the way to go is to look at how OpenGL picking works (gluPickMatrix does exactly the same thing - build a projection matrix from the current one and a small rectangle on the screen covering the mouse pointer).

I've found this in the glm library (matrix_transform.inl):

```cpp
template <typename T, precision P, typename U>
GLM_FUNC_QUALIFIER tmat4x4<T, P> pickMatrix(tvec2<T, P> const & center, tvec2<T, P> const & delta, tvec4<U, P> const & viewport)
{
    assert(delta.x > static_cast<T>(0) && delta.y > static_cast<T>(0));
    tmat4x4<T, P> Result(static_cast<T>(1));

    if(!(delta.x > static_cast<T>(0) && delta.y > static_cast<T>(0)))
        return Result; // Error

    tvec3<T, P> Temp(
        (static_cast<T>(viewport[2]) - static_cast<T>(2) * (center.x - static_cast<T>(viewport[0]))) / delta.x,
        (static_cast<T>(viewport[3]) - static_cast<T>(2) * (center.y - static_cast<T>(viewport[1]))) / delta.y,
        static_cast<T>(0));

    // Translate and scale the picked region to the entire window
    Result = translate(Result, Temp);
    return scale(Result, tvec3<T, P>(static_cast<T>(viewport[2]) / delta.x, static_cast<T>(viewport[3]) / delta.y, static_cast<T>(1)));
}
```


I guess center is the 2D center of the region, delta is its 2D width & height, and viewport is the usual OpenGL viewport (x, y, width, height).

Probably you have to multiply your projection matrix by the result, or vice versa.

Looks like a simple scale and translate operation (but i have only a weak understanding of how projection matrices work, so i wonder if it is really that easy).
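It can be sanity-checked with plain arithmetic: per axis, the pick matrix above maps NDC x to s·x + t with s = w/d and t = (w − 2(c − v))/d, which sends the picked region's center to the middle of the screen. A self-contained check with plain floats (no glm; names are mine):

```cpp
#include <cmath>

// NDC effect of a pickMatrix-style transform along one axis:
// viewport origin v, viewport size w, region center c, region size d
float pickNdc(float ndc, float v, float w, float c, float d)
{
    float s = w / d;                    // scale picked region up to full size
    float t = (w - 2.f * (c - v)) / d;  // translation term from pickMatrix
    return s * ndc + t;
}
```

Feeding in the NDC coordinate of the region's own center, 2·(c − v)/w − 1, returns 0: the picked region ends up centered on screen.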

##### Share on other sites

I think the way to go is to look at how OpenGL picking works (gluPickMatrix does exactly the same thing) [...] Looks like a simple scale and translate operation (but i have only a weak understanding of how projection matrices work, so i wonder if it is really that easy).

Yes, a pickMatrix-style approach is the simplest way of doing it. It is equivalent to just scaling (and potentially translating, if the zoom isn't centered) your post-projection normalized device coordinates in the XY plane, and equivalent to computing a new perspective/ortho matrix with the same near and far values but with top/left/bottom/right computed as required. Since you render the exact same view as before but with part of it magnified, this is equivalent to approach number 3, except that you get the full screen/buffer resolution and don't get ugly scaling artifacts.

Doing it pickMatrix-style also means that you don't need separate versions for perspective and orthographic cameras (which you would if you chose to use fov/compute a new projection matrix from top/left/bottom/right).

Moving the camera definitely isn't a zoom at all - that's a dolly - although some applications do it and erroneously call it zoom.

I'm not sure what you mean by number 2. It sounds as if you want to scale individual objects? That would lead to all sorts of issues (for example, objects starting to interpenetrate after scaling).

Edited by l0calh05t

##### Share on other sites

"Put in on the screen Mr Sulu, magnification 50." - i'm working on SIMSpace (formerly SIMTrek)

Ha, ok :)

E.g. You hace 3D scene on a screen sized 1000 * 500, and you want to take a rectangle form pixels (4,5) to (90, 55) and this rectangle should be upscaled to fill the screen.

So it's about a area on the upper left of the view port.

By changing FOV you can only zomm to the center rgion of the screen.

To get the offset to the left and upwards you need to change POV as well ("Point of View", the "focus point" - hope that's the proper term).

Usually this point is exactly at the screen center, and mostly that's not mentioned further anywhere. But in your case you need to move this point even out of the screen.

I figured the math out years back and might still have the code, but first i'll try to find something on the net...

It looks like what you want is more of an off-center projection matrix. Something like this, which gives you just a portion of the original render, magnified:

```cpp
// grid args are float for code readability and to avoid casts, but hold integer values
DirectX::XMMATRIX ComputeProjection(float fov, float aspect, float gridW, float gridH,
                                    float u, float v, float nearZ, float farZ)
{
    // fov is the horizontal field of view in radians; aspect = height / width
    float horHalfTan = std::tan(fov / 2.f);
    float verHalfTan = horHalfTan * aspect;

    // select cell (u, v) of a gridW x gridH grid; the off-center bounds are
    // given at the near plane, hence the multiplication by nearZ
    float left   = nearZ * (-horHalfTan + u * 2.f * horHalfTan / gridW);
    float right  = left + nearZ * 2.f * horHalfTan / gridW;
    float bottom = nearZ * (-verHalfTan + v * 2.f * verHalfTan / gridH);
    float top    = bottom + nearZ * 2.f * verHalfTan / gridH;
    return DirectX::XMMatrixPerspectiveOffCenterLH(left, right, bottom, top, nearZ, farZ);
}
```


##### Share on other sites

E.g. you have a 3D scene on a screen sized 1000 × 500, and you want to take the rectangle from pixel (4,5) to (90,55) and upscale it to fill the screen. So it's about an area in the upper left of the viewport.

not quite. you can't just scale in screen space, because all that does is magnify what is already visible without magnification. it doesn't include things beyond the far clip plane that would be visible with magnification, or things inside the viewing frustum that shrink to sub-pixel size without magnification. also, whether the camera is looking straight ahead or at a target, it should always scale with respect to the center of the screen, not the upper left edge of the viewing frustum.

By changing FOV you can only zoom in on the center region of the screen

that's what's required. if the camera is looking at a klingon battlecruiser and you zoom on the upper left section of the screen, the ship gets bigger and moves down and to the right, off the screen. if you zoom on the center of the screen, it gets bigger but remains centered - which is the desired effect.

for this problem it's best to think in terms of world space, not screen space. once you get to screen space, you've already lost data that should be there under magnification, so no screen-space solution will work for all cases without workarounds.

And equivalent to computing a new perspective/ortho matrix with the same near and far values but top/left/bottom/right computed as required.

sounds a lot like changing FOV....