Norman Barrows

rendering a scene with magnification


 

how is it done?

 

a few methods come to mind....

 

 

1.  move everything x times closer to the camera, then render as usual.  seems pretty easy, given position vector p0 = x,y,z from camera to object, and a magnification factor of m, new position is just p1 = (x/m,y/m,z/m) - right?

 

2. move the far plane x times farther out, and render everything at x times normal size. the zbuf would have lower precision, possibly leading to z-fighting.

 

3.  render to offscreen, then zoom by x amount.  doesn't get stuff beyond the far plane that would be visible under magnification. doesn't get stuff within the viewing frustum that would be visible under magnification, but not visible without magnification.

 

 

what about the field of view? does that need to be adjusted? could FOV be used? i know narrow FOVs give a "scope" like effect....


FOV is the primary option (this is how MDK or Quake 3 zoom guns worked).

The more you magnify using a smaller FOV, the less perspective distortion appears and it might become harder to focus on something in motion.

Moving closer to the spot is the second option, but the user might recognize the change in camera position.

Doing both at the same time but in opposite directions can cause the nice 'wow'-effect often seen in films (e.g. face becomes larger while the background becomes smaller).


i'm wondering what the difference between method #1 and reduced FOV would be.    i've experimented a lot with FOVs, trying to reproduce human vision better.  so i'm quite familiar with both the scope effect (low FOV) and turning the world inside out (FOV > 180).

 

is there a mathematical relation between FOV and apparent magnification?    i'd have to check the FOV formulas, don't recall offhand.    suspect there is....     might be an easy way out.

 

 

method 1 might give a fisheye type effect, since it doesn't narrow the FOV.


From your initial post there seems to be no big difference between methods 1 & 2. Method 1 does a uniform scale around the camera; for method 2 you did not specify the origin, but it's a uniform scale too, so any difference will not affect how the scene looks.

 

i've experimented a lot with FOVs, trying to reproduce human vision better

 

This sounds like what you really want is something like: objects at the center of the view should appear large, but you also want a wide angle of vision?

For an example see the image here: http://www.gamedev.net/topic/667455-how-are-sphere-maps-created/

 

The math here is simple, but the problem is that this leads to a nonlinear projection: straight lines in world space become curves on screen, so traditional triangle rasterization can't do this (raytracing can, easily).

You could render with a FOV close to 180 and do postprocessing to magnify the center of the image, but you would need very high resolution - unfortunately you have little detail where you need it most, and vice versa.

The solution might be to render 4 images to compensate for that (something like looking at the corner of a cubemap from its center), and combine them into a final image.

 

I think this will become standard in the far future (at least for wide-FOV VR headsets), but i'm not sure whether people would accept the unavoidable fisheye look on a flat screen.


From your initial post there seems to be no big difference between methods 1 & 2.

 

yes - zbuf resolution seems to be about it.

 

 

For an example see the image here

 

yeah - that's what i meant by fisheye. don't want that.

 

it will be used to implement:

 

"Put it on the screen, Mr. Sulu, magnification 50."    - i'm working on SIMSpace (formerly SIMTrek)

 

guess FOV will be the easy way to go. well, at least you don't get weirdness at low FOV like you do at high (i.e. edge curling), so it should be ok.

 

now i need to look up or determine the relation between FOV and apparent scale.


"Put it on the screen, Mr. Sulu, magnification 50." - i'm working on SIMSpace (formerly SIMTrek)

 

Ha, ok :)

 

E.g. you have a 3D scene on a screen sized 1000 * 500, and you want to take a rectangle from pixels (4,5) to (90, 55), and this rectangle should be upscaled to fill the screen.

So it's about an area at the upper left of the viewport.

By changing FOV you can only zoom to the center region of the screen.

To get the offset to the left and upwards you need to change the POV as well ("Point of View", the "focus point" - hope that's the proper term).

Usually this point is exactly at the screen center, and mostly that's not mentioned anywhere. But in your case you need to move this point even out of the screen.

 

I figured the math out years back and might still have the code, but first i'll try to find something on the net...


... no luck - my own code does not create a final 4x4 matrix (it's used in a software renderer), and i have not found any useful resources on the topic.

 

I think the way to go is to look up how OpenGL picking works (gluPickMatrix - that's exactly the same: build a projection matrix from the current one and a small rectangle on the screen covering the mouse pointer).

 

I've found this in the glm library (matrix_transform.inl):

template <typename T, precision P, typename U>
    GLM_FUNC_QUALIFIER tmat4x4<T, P> pickMatrix(tvec2<T, P> const & center, tvec2<T, P> const & delta, tvec4<U, P> const & viewport)
    {
        assert(delta.x > static_cast<T>(0) && delta.y > static_cast<T>(0));
        tmat4x4<T, P> Result(static_cast<T>(1));

        if(!(delta.x > static_cast<T>(0) && delta.y > static_cast<T>(0)))
            return Result; // Error

        tvec3<T, P> Temp(
            (static_cast<T>(viewport[2]) - static_cast<T>(2) * (center.x - static_cast<T>(viewport[0]))) / delta.x,
            (static_cast<T>(viewport[3]) - static_cast<T>(2) * (center.y - static_cast<T>(viewport[1]))) / delta.y,
            static_cast<T>(0));

        // Translate and scale the picked region to the entire window
        Result = translate(Result, Temp);
        return scale(Result, tvec3<T, P>(static_cast<T>(viewport[2]) / delta.x, static_cast<T>(viewport[3]) / delta.y, static_cast<T>(1)));
    }

I guess center is the 2D region center, delta is the 2D width & height, and viewport is the usual OpenGL data structure.

Probably you have to multiply your projection matrix with the result, or vice versa.

 

Looks like a simple scale and translate operation (but i have only a weak understanding of how projection matrices work, so i wonder if it is that easy)


Looks like a simple scale and translate operation (but i have only a weak understanding of how projection matrices work, so i wonder if it is that easy)

 

Yes, a pickMatrix-style approach is the simplest way of doing it. It is equivalent to just scaling (and potentially translating if the zoom isn't centered) your post-projection normalized device coordinates in the XY plane. And equivalent to computing a new perspective/ortho matrix with the same near and far values but top/left/bottom/right computed as required. As you render the exact same view as before but with part of it magnified, this is equivalent to approach number 3, except that you get the full screen/buffer resolution and don't get ugly scaling artifacts.

 

Doing it pickMatrix-style also means that you don't need separate versions for perspective and orthographic cameras (which you would if you chose to use fov/compute a new projection matrix from top/left/bottom/right).

 

Moving the camera definitely isn't a zoom at all, although some applications do this kind of "dolly zoom" and just erroneously call it zoom.

 

I'm not sure what you mean by number 2; it sounds as if you want to scale individual objects? That would lead to all sorts of issues (for example objects starting to interpenetrate after scaling).

Edited by l0calh05t


 

By changing FOV you can only zoom to the center region of the screen. To get the offset to the left and upwards you need to change the POV as well.

 

 

It looks like what you want is more of an off-center projection matrix - something like this, so that you get just a portion of the original render, magnified.

// grid args are float for code readability and to avoid casts, but they hold integer values
DirectX::XMMATRIX ComputeProjection( float fov, float aspect, float gridW, float gridH, float u, float v, float nearZ, float farZ )
{
    float horHalfTan = std::tan( fov / 2.f );
    float verHalfTan = horHalfTan * aspect;

    float left   = -horHalfTan + u * 2.f * horHalfTan / gridW;
    float right  = left + 2.f * horHalfTan / gridW;
    float bottom = -verHalfTan + v * 2.f * verHalfTan / gridH;
    float top    = bottom + 2.f * verHalfTan / gridH;
    // XMMatrixPerspectiveOffCenterLH takes (left, right, bottom, top), measured at the near plane
    return DirectX::XMMatrixPerspectiveOffCenterLH( left * nearZ, right * nearZ, bottom * nearZ, top * nearZ, nearZ, farZ );
}


E.g. You hace 3D scene on a screen sized 1000 * 500, and you want to take a rectangle form pixels (4,5) to (90, 55) and this rectangle should be upscaled to fill the screen. So it's about a area on the upper left of the view port.

 

not quite. you can't just scale in screen space, because all that does is magnify what is visible without magnification. it doesn't include things beyond the far clip plane which would be visible with magnification, or things inside the viewing frustum that scale to zero or less without magnification. also, whether the camera is looking straight ahead or at a target, it should always scale with respect to the center of the screen, not the upper left edge of the viewing frustum.

 

By changing FOV you can only zoom to the center region of the screen
 

 

that's what's required. if the camera is looking at a Klingon battlecruiser, as you zoom on the upper left section of the screen, the ship gets bigger and moves down and to the right off the screen. as you zoom on the center of the screen, it gets bigger but remains centered on the screen - which is the desired effect.

 

for this problem it's best to think in terms of world space, not screen space. once you get to screen space, you've already lost data that should be there with magnification, so no screen-space solution will work for all cases without workarounds.


And equivalent to computing a new perspective/ortho matrix with the same near and far values but top/left/bottom/right computed as required.

 

sounds a lot like changing FOV.... 


you don't need separate versions for perspective and orthographic cameras

 

no ortho cameras required here. not sure when one might want both. perspective is generally considered superior to ortho for most things, so if you use perspective there's little point in dropping back to ortho for some things - it just makes your ortho graphics look worse compared to your perspective graphics. that's why a uniform level of graphics throughout a title is desirable: you don't want the good stuff making the bad stuff look even worse.


I'm not sure what you mean by number 2; it sounds as if you want to scale individual objects? That would lead to all sorts of issues (for example objects starting to interpenetrate after scaling).

 

yes - i suspect that would be likely at extreme magnification levels. 


i suspect that the answer - as usual - is way simpler than one might think:

 

you're looking at a planet and it fills half the screen. cut the FOV in half, and it seems twice as big (in diameter), and thus twice as close.

 

so say you have a base_FOV of 90 degrees (45 left and right). 

 

#define base_FOV (pi / 2.0f)

new_FOV = base_FOV / magnification;

set_FOV ( new_FOV );

 

and that's all there is to it.

Edited by Norman Barrows


 

not quite. you can't just scale in screen space, because all that does is magnify what is visible without magnification.

I don't follow your logic - maybe i have another definition of magnification, but it sounds a lot like "scaling" to me, especially with the idea described as taking a sub-rectangle of the full image and showing it fullscreen. An off-center projection matrix is definitely the way to go in that case. The nearZ/farZ is a side issue - who doesn't use an infinite-farZ projection matrix these days? And if your frustum culling is properly plugged into your view + projection, then the question of culling smaller-than-a-pixel objects is not relevant either.

 

The off-center projection effect can be seen as zooming/navigating in a picture, which sounds a lot like what the OP describes.


 

 

it doesn't include things beyond the far clip plane which would be visible with magnification, or things inside the viewing frustum that scale to zero or less without magnification.

I agree, that statement makes no sense at all. The far clip plane (if present at all) is completely unrelated, and "things inside the viewing frustum that scale to zero or less without magnification" just do not exist.

 

@Norman: Maybe you are confused because you believe we mean scaling the current view as a texture? That would obviously lead to loss of detail. We are actually talking about scaling in post-projection space, to actually render a larger view of part of the screen in full resolution.

Edited by l0calh05t


Yep, there seems to be some confusion left...

 

is there a mathematical relation between FOV and apparent magnification?

#define base_FOV (pi / 2.0f) new_FOV = base_FOV / magnification; set_FOV ( new_FOV );

 

Yes, but it depends on what you want to magnify and where it is.

See this picture:

 

[attached image: diagram of 30 and 60 degree FOV frustums projecting a black line and a double-length grey line]

 

You can see that doubling the fov from 30 to 60 degrees does not make the black line half its size, so your simple formula is not correct in general. It is only correct for points that all have the same distance from the camera.

E.g. if i want the fov to scale my view to fit the double-sized grey line exactly, i believe it's this:

 

newFov = atan (x*2 / distance along view vector)

 

You would need to verify, but this formula works only for lines perpendicular to the view direction, so it's no general solution either.

Just to mention - i'd stick with your formula; it's the best thing Mr. Sulu could do :)

 

However, imagine Mr. Spock points out something interesting on the surface of a planet and marks it with a small red rectangle which happens to be NOT at the center of the screen, and Kirk says 'scale it up, Scotty!'...

 

Then you need the 'picking' or 'Offcenter' methods we talked about.


However, imagine Mr. Spock points out something interesting on the surface of a planet and marks it with a small red rectangle which happens to be NOT at the center of the screen, and Kirk says 'scale it up, Scotty!'... Then you need the 'picking' or 'Offcenter' methods we talked about.

 

not if the visual scanners (the camera) are aimed at the target first! 

 

this will always be the case, so the target will always be in the center of the screen, and no offset projection is required - simply playing with FOV should work just fine. but from your diagram, it's also apparent that the relation between FOV and apparent size is not linear, i.e. 1/2 FOV != 2x mag.


it's also apparent that the relation between FOV and apparent size is not linear.

 

Yes, but it must still be easy to calculate.

If we think of the black line above as the image projected onto the near clip plane, that example might be better than i initially thought (probably i was wrong with some statements).

It should help to find a general solution independent of position or distance.

 

I think this works:

 

//float sizeAtDistOne = 1.0f;
//float fov = atan (sizeAtDistOne / 1.0f); // but we don't wanna calc the current fov - we already know it; calc the size instead:

float fov = PI / 4.0f; // assuming you actually have a 90 degree fov, so a half angle of 45 degrees = pi/4 rad
float sizeAtDistOne = tan(fov);

float mag = 2.0f; // Kirk's order
float targetSize = sizeAtDistOne / mag;

float targetFov = atan (targetSize / 1.0f);

SystemTools::Log ("sizeAtDistOne: %f current fov: %f target fov for magnification of (%f): %f",
    sizeAtDistOne, fov / PI * 180 * 2, mag, targetFov / PI * 180 * 2);

 

I get this output:

 

sizeAtDistOne: 1.000000 current fov: 90.000003 target fov for magnification of (2.000000): 53.130102

 

I have tested this on my viewport. Changing the fov from 90 to 53.13 indeed doubles the pixel diameter of my scene - exactly :)

Hopefully this is not just a special case for an initial 90 degrees - i may have a bug somewhere.

Let me know if it works...


 

float targetFov = atan (targetSize / 1.0f); [...] I have tested this on my viewport. Changing fov from 90 to 53.13 indeed doubles the pixel diameter of my scene - exactly :)

 

There's an even easier way to do it. Just scale the projection matrix in the XY plane. No need to calculate atan just to calculate the tan of that angle again while creating the perspective matrix.


There's an even easier way to do it. Just scale the projection matrix in the XY plane. No need to calculate atan just to calculate the tan of that angle again while creating the perspective matrix.

 

haha, feeling really stupid now :D

awesome!

