I have a bounding sphere of radius r. I want this sphere to always occupy more or less the same portion of the rendered image, even when the camera changes position. To do this, I thought of dynamically changing the FOV according to the camera's movement. Note that the camera is always "looking at" the centre of the sphere.
Armed with pen and paper and my old trigonometry knowledge, I came up with the formula
FOV = 2 * arctan(radius/distance)
which was also confirmed after a bit of googling. This value is fed to glm::perspective, with the distance computed as

distance = glm::distance(SphereCentre, cameraPosition)
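For clarity, here is a minimal sketch of the formula as described above. The function name and the usage snippet are illustrative, not taken from my actual code; note that recent versions of glm::perspective expect the vertical FOV in radians.

```cpp
#include <cmath>

// FOV formula from the post: the full vertical field of view, in radians,
// for a sphere of the given radius at the given distance from the camera:
//     FOV = 2 * arctan(radius / distance)
double sphereFov(double radius, double distance) {
    return 2.0 * std::atan(radius / distance);
}

// Illustrative usage (assuming a recent GLM, which takes radians):
//     float fov = static_cast<float>(
//         sphereFov(radius, glm::distance(SphereCentre, cameraPosition)));
//     glm::mat4 proj = glm::perspective(fov, aspect, zNear, zFar);
```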
I've double-checked the centroid position and radius in world space (directly on the data I have; I do not perform any transformation on my mesh).
My problem is that what happens is not at all what I expect. First of all, the portion of the scene that is visible is much larger than the intended sphere. Also, when the distance decreases, the sphere appears to move farther away, like a zooming-out effect. I would expect this if I were changing only the FOV, but shouldn't it be countered by the formula above?
EDIT: If I remove the 2* in front of the formula (2*arctan...), it is sort of OK, except for terrible distortion when the distance is very small. Why is that? (Both questions: why the distortion, and why it is sort of OK without the 2* — I'm quite certain of my calculations.)
Edited by Mchart, 06 July 2014 - 01:55 PM.